I am currently doing research with the objective of building up enough analysis, someday, to prepare for the design and development of the machine of the future, modeled on Robocop - the heartless cop. The basic underlying concept is that although we think in emotions, it is the elimination of such distractions that will make the machine more effective, higher in capacity and more efficient in its processing, since it seems sensible to assume that artificial intelligence will not spring up like a mushroom; it has to grow.
Another important obstacle to overcome is the generation gap, i.e. how knowledge passes from one generation of humans to the next and what happens to it along the way. This is the subject of a considerable body of research in its own right, and I hope to cover it as soon as I find a sponsoring organization. Information would be stored from millions of human brains connected to the supercomputer, so that the engine is able to learn without losing any time; this is the most efficient view of information processing. However, we need to do more than store the information - that kind of technology is already available. We need the machine to start learning unsupervised, and this is the major innovation that artificial intelligence research is hoped to deliver.
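To make the supervised/unsupervised distinction concrete, here is a minimal sketch using a toy k-means loop, a standard unsupervised algorithm; the readings and the choice of two clusters are invented for illustration and merely stand in for the kind of unlabeled signal data discussed above.

# A toy k-means clustering loop: structure is found in unlabeled data
# with no human-provided labels, which is what "unsupervised" means.
import random

def kmeans(points, k, iterations=100):
    """Group unlabeled 1-D points into k clusters."""
    centroids = random.sample(points, k)
    for _ in range(iterations):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p - centroids[i]) ** 2)
            clusters[nearest].append(p)
        # Move each centroid to the mean of its assigned points.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Two loose groups of one-dimensional readings, e.g. signal strengths.
readings = [1.0, 1.2, 0.9, 5.0, 5.3, 4.8, 5.1]
print(kmeans(readings, k=2))  # ~[1.03, 5.05]: two groups, no supervision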

Use case 1.0 - the disgruntled employee
John Smith was fired by a manager a few years ago and still feels resentment towards him, to the point that every time an emotional connection is made, hate is the emotion that surfaces. The logical question, then, is whether to clone the brain and turn the hate into forgiveness - a Catholic concept. The answer to this question seems too simplistic, however: human emotions are difficult to untangle, whereas they might be more flexibly re-programmed in an artificial intelligence.
Secondly, there is the question of whether stopping the hate resolves the problem or removes knowledge altogether. The brain works by association, so the machine would need a way to archive information and knowledge without replicating the undesired behavior.
This would have to be taken into consideration when writing algorithms for supervised and unsupervised learning. Observations indicate that memories do not go away; they remain bound by synapses with weaker associations, and the intelligence mechanism needs to improve on the prioritization of those associations based on the goals of the system (or sub-system) being programmed. One has to investigate whether a computer based on a neural network (the human brain model) is the most appropriate model, or whether the imperfection lies within the biological architecture of the human brain and requires structural re-engineering; this is the major complexity of this project. It is also important for the scientist to follow professionally endorsed academic research methods, including detachment from the emotional state of the programming; the most appropriate method is for the original author of the idea to be kept separate from the technical architect of the project, due to the perceived inherent conflict of interest. Other ethical safeguards would have to be set out in a code of ethics written for these projects.
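As a rough sketch of this "archive, don't delete" idea - an association is down-weighted rather than removed, and recall is prioritized by weight - consider the toy model below. The Memory class, the labels and the decay factor are illustrative assumptions, not a proposal for an actual neural architecture.

# Associations are weakened, never deleted: the knowledge stays
# archived while recall is re-ranked towards the system's goals.
from dataclasses import dataclass, field

@dataclass
class Memory:
    label: str
    # Association weights to other memories, keyed by label.
    associations: dict = field(default_factory=dict)

    def weaken(self, other: str, factor: float = 0.5) -> None:
        """Reduce an undesired association instead of deleting it."""
        if other in self.associations:
            self.associations[other] *= factor  # archived, still recoverable

    def recall(self, top_n: int = 3):
        """Return the strongest associations first (goal-driven priority)."""
        ranked = sorted(self.associations.items(),
                        key=lambda kv: kv[1], reverse=True)
        return ranked[:top_n]

manager = Memory("manager", {"hate": 0.9, "projects": 0.6, "skills": 0.7})
manager.weaken("hate")   # re-prioritize, do not erase
print(manager.recall())  # skills and projects now outrank hate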
Use case 2.0 - surgeon
We do not have in-house expertise in medical surgery, and we are therefore looking for people who would want to collaborate. In brief, the robot will be mechanically driven and will therefore reduce the risk of fatigue, improper professional decisions and conflicts of interest.
Initially, we intend to carry out a longitudinal analysis of whether human intelligence, and its contribution towards scientific research, is more effective as a "duty of care" than a supervised or unsupervised learning algorithm that seeks to carry out surgery at a lower maintenance cost. As with most research projects, research and development costs are expected to be high; these would be balanced by the anticipated economies of scale and production over a long period of time, which is the raison d'être for scaling this project out to the widest possible audience and research institutes.
Other useful use cases include industrial research, telecommunications, and a study of the models underlying the social media network Facebook, since it closely mirrors how people socialize - with due regard for the privacy of research subjects. Privacy is a concern because, in certain cases, decisions must be taken on how to store data securely, confidentially and within the parameters of the law, whilst still keeping the relation between personal data and research information. Information systems theory leads us to store data on a reductionist "need to know" basis, although the objectives of the research prevail in the strategic sense.
Industry leaders have criticized artificial intelligence on the grounds that warfare can lead to weapons controlled by irresponsible political leaders: "If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow." Sourced from futureoflife.org.
System architecture
The system would have to use the largest processing power available, but may eventually have to be deployed on a small processor; capacity planning is therefore a highly relevant stage of this research.
Inputs shall include a variety of devices: computers attaching themselves to the network - generalized through an Internet connection - and human beings connected through electrodes that read brain signals. Capturing all possible signals and interpreting them at source arises as a major quality objective. The technologies to interface with vary widely, given the increasing availability of proprietary and open-source technologies; knowledge is learnt from stable and unstable technologies alike, and this is the main objective behind machine learning, i.e. processing the unknown at unprecedented efficiency and with superior quality of processing. This is the reason why this project is - in theory - feasible.
Processing shall include capturing business logic using established models of software architecture and attempting to create new ones along the research project; this is the motivation behind novel projects, i.e. the unknown. The application shall include a usable user interface, modern software design, and professional standards of business intelligence processing, whilst loading information from a pre-fetching database and extracting, transforming and loading it into a multi-location data warehouse. Mirrored data shall be kept in sync.
Outputs include storing new knowledge and structured information within various research institutes, where client applications might be developed and maintained to further process information and feed it back to the central brain. Eventually, outputs become visible through computer-enabled devices, which may consist of thick terminals (desktop PCs), thin terminals, mobile phones and wearable technologies. The advantage of a multi-location database is that data can be spread out to reduce the risk of accidental deletion, although it is assumed that backups are taken through normal systems administration procedures. Many different communication protocols would have to be interfaced with, given the long history of wireless and wired protocols in widespread use, and satellite links would be explored in order to transmit data across different planets rather than limiting intelligence to our own little planet. The physical computers making up the network would therefore have to be resilient on different planets, or else reside on satellites, allowing for synchronization efficient enough to give the impression of real-time processing.
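The processing stage above can be sketched as a minimal extract-transform-load (ETL) pipeline whose writes are mirrored to several locations. The raw feed, the site names and the in-memory "warehouses" below are stand-ins for real databases; the sketch illustrates only the data flow, not a deployable system.

# A minimal ETL pipeline with writes mirrored to multiple locations.
def extract(source):
    """Pull raw records from an input device or feed."""
    return list(source)

def transform(records):
    """Normalize raw signals into structured rows."""
    return [{"signal": r.strip().lower(), "length": len(r.strip())}
            for r in records if r.strip()]

def load(rows, warehouses):
    """Write the same rows to every mirrored location."""
    for site, store in warehouses.items():
        store.extend(rows)  # in practice: an INSERT per warehouse site

# Three mirrored locations reduce the risk of accidental deletion.
warehouses = {"site_a": [], "site_b": [], "site_c": []}
raw_feed = ["  Alpha ", "BETA", "", "gamma  "]
load(transform(extract(raw_feed)), warehouses)
print(warehouses["site_b"])  # identical copy at every site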
This is the architect's vision of the artificial intelligence project; it is expected to take at least 200 years to complete.
I think that the data architecture should initially be composed of the following data set groupings:
a) a human sample of around 1,000,000 personas, using a stratified sample to diversify psychological influences, age, socio-economic background and other demographic factors (see the sketch after this list).
b) "Robocop" sample, which is the independent data set whose intelligence is targeted for situations where emotions are thought to be unproductive or lacking efficiency, for example, in the production of high performance cars.
c) unstructured data that needs to be processed using supervised, semi-automated learning and eventually unsupervised learning, subject to review by a committee of experts and quality assurance analysts.
The above is a textual representation in terms of inputs, processing and outputs; the diagram was deliberately not scanned, to make life a bit more difficult for people who want to copy this idea without acknowledging my work.
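As a hedged illustration of the stratified sample proposed in (a), the sketch below splits a toy population into strata and samples each in proportion to its size; the age bands, income groups and population records are invented for illustration.

# Proportional stratified sampling: each stratum contributes to the
# sample in proportion to its share of the population.
import random
from collections import defaultdict

def stratified_sample(population, key, sample_size):
    """Draw a sample whose strata proportions match the population's."""
    strata = defaultdict(list)
    for person in population:
        strata[key(person)].append(person)
    sample = []
    for members in strata.values():
        share = round(sample_size * len(members) / len(population))
        sample.extend(random.sample(members, min(share, len(members))))
    return sample

# Toy population; a real study would draw from census-style records.
population = [{"age_band": random.choice(["18-35", "36-60", "60+"]),
               "income": random.choice(["low", "mid", "high"])}
              for _ in range(10_000)]
sample = stratified_sample(population,
                           key=lambda p: (p["age_band"], p["income"]),
                           sample_size=1_000)
print(len(sample))  # ~1,000, with each stratum proportionally represented

A proportional (rather than equal) allocation keeps the sample's demographic mix aligned with the population's, which is the point of stratifying in the first place.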
Has supercomputer technology outperformed the human brain? "The brain carrying out twenty quadrillion operations every second is not synonymous with twenty quadrillion memories being stored every second. Rather, data is temporarily stored, in a scratch-pad sort of fashion, and is over-written when no longer needed. Given this level of complexity, how then can we estimate how much artificial memory must be allocated for a machine to simulate brain functions? Paul Reber is a brain researcher. If he is correct, the brain has a memory capacity of 2.5 quadrillion bytes. An average modern human's lifespan is seventy-five years. Reber's figure then implies the average human brain stores thirty-three trillion bytes a year (2.5 quadrillion divided by 75). Using this figure, how much artificial memory would be required to historize the activities of the 100 billion people who have ever lived?" Sourced from an online article.
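Working through the arithmetic of the quoted passage, and taking its figures at face value (Reber's 2.5 quadrillion bytes is itself a rough estimate), gives an order-of-magnitude answer:

# The quoted figures, taken at face value, as an order-of-magnitude check.
BYTES_PER_BRAIN = 2.5e15     # 2.5 quadrillion bytes (Reber's estimate)
LIFESPAN_YEARS = 75
PEOPLE_EVER_LIVED = 100e9    # 100 billion

per_year = BYTES_PER_BRAIN / LIFESPAN_YEARS
print(f"{per_year:.2e} bytes/year")   # ~3.33e13, i.e. ~33 trillion

total = BYTES_PER_BRAIN * PEOPLE_EVER_LIVED
print(f"{total:.2e} bytes in total")  # 2.5e26 bytes, ~250 yottabytes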
As of 2015, there are supercomputers which can perform quadrillions of floating-point operations per second (FLOPS), according to an article found online.
I am of course open to questions, but even more open to answers, funding and structural help; you may contact me at camilleri.jon@gmail.com.
References
1. Robocop (2014) - http://www.imdb.com/title/tt1234721/
2. Artificial intelligence - http://en.wikipedia.org/wiki/Artificial_intelligence
3. Psychological projection (the art of blaming other people) - http://en.wikipedia.org/wiki/Psychological_projection
4. Signal processing - http://en.wikipedia.org/wiki/Signal_processing
5. Copyright - http://en.wikipedia.org/wiki/Copyright
6. Model-View-Controller - http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller
7. American Association of Neurological Surgeons - https://www.aans.org/About%20AANS/~/media/4A6862BB037742FF99B833D609D23B1E.ashx
8. Data Protection Directive (95/46/EC) and other relevant laws - http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:31995L0046:en:HTML
9. Could a robot do my job? Robotics in the UK (BBC)
10. Cryonics: What the nervous system of the roundworm tells us about freezing brains and reanimating human minds.
11. A Post-Human World: Should We Rage, Rage Against the Dying of the Mites?