Friday, December 26, 2014

Artificial intelligence...coding the heartless machine

Artificial intelligence (AI) is intelligence exhibited by machines or software, and the name of the academic field that studies how to create it. Major AI researchers and textbooks define the field as "the study and design of intelligent agents", where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1955, defined it as "the science and engineering of making intelligent machines".

I am currently doing research with the objective that someday there will be enough analysis to prepare for coding and developing the machine of the future, modeled on Robocop - the heartless cop. The basic underlying concept is that although we think in emotions, it is the elimination of such distractions that will make the machine more effective and more efficient in its processing capability. It seems sensible to assume that artificial intelligence won't spring up like a mushroom; it has to grow.
Another important obstacle to overcome is the generation gap: how does knowledge pass from one generation of humans to the next, and what happens to it along the way? This is the subject of quite a few pages of research in its own right, and I hope to cover it as soon as I find a sponsoring organization. Information would be stored from millions of human brains connected to the supercomputer, so the engine can learn without losing any time; this is the most efficient view of information processing. However, we need to do more than store the information - that kind of technology is already available. We need the machine to start learning unsupervised, and this is the major innovation that artificial intelligence research is hoped to deliver.

Use case 1.0 - the disgruntled employee
John Smith was fired by a manager a few years ago and still feels resentment towards him, to the point that every time there is an emotional connection, hate is the emotion that comes up. The logical question, therefore, is whether to try to clone the brain and turn the hate into forgiveness - a Catholic concept. The answer to this question seems too simplistic, however: human emotions are difficult to untangle, whereas they might be more flexibly re-programmed in an artificial intelligence.
Secondly, there is the question of whether stopping hate resolves the problem or removes knowledge altogether. Since the brain works by association, the machine would obviously need a way to archive information and knowledge without replicating the undesired behavior.
This would have to be taken into consideration when writing algorithms for supervised and unsupervised learning. Observations indicate that memories do not go away; they are just bound by synapses with weaker associations, so the intelligence mechanism needs to improve how it prioritizes associations based on the goals of the system (or sub-system) being programmed. One has to investigate whether a computer based on a neural network (the human brain model) is the most appropriate model, or whether the imperfection lies within the biological architecture of the human brain itself and requires structural re-engineering; this is the major complexity of this project. It is also important for the scientist to follow professionally endorsed academic research methods, including detachment from the emotional state of the programming. The most appropriate method is for the original author of the idea to be kept separate from the technical architect of the project, due to a perceived conflict of interest. Other ethical safeguards would be set out in a code of ethics written for these projects.
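As a very rough sketch of the idea that memories are never deleted, only down-weighted and re-prioritized, one could imagine an associative store like the following. This is purely hypothetical - the class, method names and weights are all invented for illustration, not an established algorithm:

```python
# Hypothetical sketch: memories persist but decay in weight;
# retrieval prioritizes the strongest associations for a concept.

class AssociativeMemory:
    def __init__(self, decay=0.9):
        self.links = {}   # (concept_a, concept_b) -> association weight
        self.decay = decay

    def associate(self, a, b, strength=1.0):
        key = (a, b)
        self.links[key] = self.links.get(key, 0.0) + strength

    def decay_all(self):
        # Weaken every association instead of deleting it, mirroring
        # the observation that memories fade but do not go away.
        for key in self.links:
            self.links[key] *= self.decay

    def recall(self, concept, top_n=3):
        # Return the strongest associations for a concept, best first.
        related = [(b, w) for (a, b), w in self.links.items() if a == concept]
        return sorted(related, key=lambda x: -x[1])[:top_n]

memory = AssociativeMemory()
memory.associate("manager", "hate", strength=5.0)
memory.associate("manager", "forgiveness", strength=1.0)
for _ in range(10):          # time passes; old associations weaken
    memory.decay_all()
memory.associate("manager", "forgiveness", strength=5.0)  # new experience
print(memory.recall("manager"))  # "forgiveness" now outranks "hate"
```

Note that the "hate" link is still present in `memory.links` - it has merely lost priority, which is the archiving-without-replication idea described above.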

Use case 2.0 - surgeon
We do not have expertise in medical surgery and are therefore looking for people who would want to collaborate. In brief, the robot would be mechanically driven, reducing the risk of fatigue, improper professional decisions and conflicts of interest.
Initially, we intend to carry out a longitudinal analysis of whether human intelligence, with its "duty of care", contributes more effectively to surgery than a supervised or unsupervised learning algorithm that performs the same work at a lower maintenance cost. As with most research projects, research and development costs are expected to be high; these would be balanced by economies of scale and production over a long period of time, which is the raison d'etre for scaling this project out to the widest possible audience of research institutes.
Other useful use cases include industrial research, telecommunications, and a study of the models underlying the social media network Facebook, since these closely reflect how people socialize - with due regard for the privacy of research subjects. Privacy is a concern because, in certain cases, data must be stored securely, confidentially and within the parameters of the law, whilst still keeping the relation between personal data and research information. Information systems theory leads us to store data on a reductionist "need to know" basis, although the objectives of the research prevail in the strategic sense.
Industry leaders have criticized artificial intelligence because warfare can lead to weapons controlled by irresponsible political leaders:
"If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow." Sourced from futureoflife.org.
System architecture
The system would have to use the largest processing power available; however, it may eventually have to be deployed on a small processor, so capacity planning is a highly relevant stage of this research.
Inputs shall include a variety of devices: computers attaching themselves to the network - generalized through an Internet connection - and human beings connected through electrodes that read brain signals. Capturing all possible signals and interpreting them at source is a major quality objective. The technologies to interface with vary widely, given the increasing availability of proprietary and open-source options. Knowledge is learnt from stable and unstable technologies alike; this is the main objective behind machine learning, i.e. processing the unknown at unprecedented efficiency and quality, and the reason why this project is - in theory - feasible.
Processing shall include capturing business logic using established models of software architecture, and attempting to create new ones along the way; this is the motivation behind novel projects, i.e. the unknown. The application shall include a usable user interface, modern software design and professional standards of business-intelligence processing, loading information from a pre-fetching database and extracting, transforming and loading it into a multi-location data warehouse. Mirrored data shall be kept in sync.
Outputs include storing new knowledge and structured information within various research institutes, where client applications might be developed and maintained to process information further and feed it back to the central brain. Eventually, outputs become visible through computer-enabled devices, which may consist of thick terminals (desktop PCs), thin terminals, mobile phones and wearable technologies. The advantage of a multi-location database is that data can be spread out to reduce the risk of accidental deletion, although it is assumed that backups are taken through normal systems administration procedures.
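The extract-transform-load step into a mirrored, multi-location warehouse could be sketched as follows. All names, the record schema and the "warehouses as lists" model are invented purely for illustration:

```python
# Toy extract-transform-load (ETL) sketch; names and schema are hypothetical.

def extract(sources):
    # Gather raw records from each input feed or device.
    for source in sources:
        for record in source:
            yield record

def transform(record):
    # Normalize the signal and keep only the fields needed downstream,
    # in line with the reductionist "need to know" storage principle.
    return {"signal": record["signal"].strip().lower(),
            "source_id": record["id"]}

def load(records, warehouses):
    # Write each transformed record to every warehouse location,
    # so the multi-location copies stay mirrored.
    for record in records:
        for warehouse in warehouses:
            warehouse.append(record)

feed_a = [{"id": 1, "signal": " Alpha "}]
feed_b = [{"id": 2, "signal": "BETA"}]
site_1, site_2 = [], []
load((transform(r) for r in extract([feed_a, feed_b])), [site_1, site_2])
print(site_1 == site_2, len(site_1))  # prints: True 2 (mirrored copies)
```

The generator pipeline means records stream through extraction and transformation one at a time, which matters once the inputs are millions of live signal feeds rather than two small lists.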
Different communication protocols have to be interfaced with, given the long history of wireless and wired communication in widespread use. Satellite links would also be explored, in order to transmit data across different planets rather than limiting intelligence to our own little planet. The physical computers making up the network would therefore have to be resilient enough for different planets, or else reside on satellites, allowing synchronization efficient enough to give the impression of real-time processing.
This is the architect's vision of the artificial intelligence project; it is expected to take at least 200 years to complete.
I think that the data architecture should initially be composed of the following data set groupings:
a) a human sample of around 1,000,000 personas, using a stratified sample to diversify psychological influences, age, socio-economic background and demographic factors.
b) a "Robocop" sample: the independent data set whose intelligence is targeted at situations where emotions are thought to be unproductive or inefficient, for example in the production of high-performance cars.
c) unstructured data that needs to be processed using supervised, semi-automated learning and eventually unsupervised learning, subject to review by a committee of experts and quality assurance analysts.
The above is a textual representation in terms of inputs, processing and outputs; the diagram was not scanned, to make life a bit more difficult for people who want to copy this idea without acknowledging my work.
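The stratified sample in (a) could be sketched as below. The strata, proportions and field names are invented for illustration - the point is only that each stratum is sampled in proportion to its share of the population, so no demographic group is accidentally dropped:

```python
import random

def stratified_sample(population, strata_key, total_size, seed=0):
    # Draw from each stratum proportionally to its population share,
    # keeping age, socio-economic and demographic groups represented.
    random.seed(seed)
    strata = {}
    for person in population:
        strata.setdefault(person[strata_key], []).append(person)
    sample = []
    for group in strata.values():
        share = round(total_size * len(group) / len(population))
        sample.extend(random.sample(group, min(share, len(group))))
    return sample

# Hypothetical mini-population stratified by an "age_band" field.
population = ([{"age_band": "18-35"}] * 60 +
              [{"age_band": "36-60"}] * 30 +
              [{"age_band": "60+"}] * 10)
sample = stratified_sample(population, "age_band", total_size=10)
print(len(sample))  # 10 in total: 6 + 3 + 1, preserving the proportions
```

For the 1,000,000-persona sample described above, the same logic would simply run over several stratification keys at once (age, socio-economic background, psychological profile) rather than a single field.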
Has supercomputer technology outperformed the human brain?
"The brain carrying out twenty quadrillion operations every second is not synonymous with twenty quadrillion memories being stored every second. Rather, data is temporarily stored, in a scratch-pad sort of fashion, and is over-written when no longer needed. Given this level of complexity, how then can we estimate how much artificial memory must be allocated for a machine to simulate brain functions? Paul Reber is a brain researcher. If he is correct, the brain has a memory capacity of 2.5 quadrillion bytes. An average modern human's lifespan is seventy-five years. Reber's figure then implies the average human brain stores thirty-three trillion bytes a year (2.5 quadrillion divided by 75). Using this figure, how much artificial memory would be required to historize the activities of the 100 billion people who have ever lived?" Sourced from an online article.
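The arithmetic in the quoted passage can be checked directly; taking Reber's 2.5-quadrillion-byte estimate at face value:

```python
# Back-of-the-envelope check of the figures quoted above.
brain_capacity_bytes = 2.5e15      # Reber's estimate: 2.5 quadrillion bytes
lifespan_years = 75
bytes_per_year = brain_capacity_bytes / lifespan_years
print(f"{bytes_per_year:.2e}")     # prints 3.33e+13, i.e. ~33 trillion bytes/year

people_ever_lived = 100e9          # 100 billion people
total_bytes = brain_capacity_bytes * people_ever_lived
print(f"{total_bytes:.1e}")        # prints 2.5e+26 bytes to historize everyone
```

So the per-year figure in the quote checks out, and historizing all 100 billion people would require on the order of 2.5 × 10^26 bytes - roughly a quarter of a trillion exabytes.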
As of 2015, there are supercomputers that can perform quadrillions of floating-point operations per second (FLOPS), according to an article found online.
I am of course open to questions, but even more open to answers, funding and structural help; you may contact me at camilleri.jon@gmail.com.

References
1. Robocop (2014) - http://www.imdb.com/title/tt1234721/
2. Artificial intelligence - http://en.wikipedia.org/wiki/Artificial_intelligence
3. Projection theory and psychology: the art of blaming other people - http://en.wikipedia.org/wiki/Psychological_projection
4. Signal processing - http://en.wikipedia.org/wiki/Signal_processing
5. Copyright - http://en.wikipedia.org/wiki/Copyright
6. Model-View-Controller - http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller
7. American Association of Neurological Surgeons - https://www.aans.org/About%20AANS/~/media/4A6862BB037742FF99B833D609D23B1E.ashx
8. Data Protection Directive (95/45/EC) and other relevant laws - http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:31995L0046:en:HTML
9. Could a robot do my job? Robotics in the UK (BBC)
10. Cryonics: What the nervous system of the roundworm tells us about freezing brains and reanimating human minds.
11. A Post-Human World: Should We Rage, Rage Against the Dying of the Mites?

Wednesday, December 24, 2014

Productivity tips...

An online dictionary defines productivity as the quality, state, or fact of being able to generate, create, enhance, or bring forth goods and services. Management students and professionals aim to maximize productivity, efficiency and effectiveness, which can be further broken down into variables including cost, quality of work and timeliness of delivery.
Nevertheless, we all know that objective setting is merely the first step towards achieving organizational aims, and one has to underline the common mistakes companies make in not understanding the effectiveness of their workforce and failing to exploit its potential:
1. Not providing sufficient training and research time is a "no go" for knowledge-based work.
2. Not having a positive attitude and not being an effective leader.
"If you're lucky enough to do well, it's your responsibility to send the elevator back down."
Kevin Spacey quoted at http://www.brainyquote.com/quotes/authors/k/kevin_spacey.html#gyyDjb6SCVQ0MOjf.99
3. Having a toxic personality, or leadership styles that are too conventional, conservative and restrictive, may give rise to conflicts with your people. In this respect, people management skills are becoming increasingly important - at least until you decide to do away with all humans and invest in capital-intensive robotics within your enterprise. Everyone must realize that "we are all humans", as the President of Malta was quoted as saying in the public media (Dec 2014).
4. Not respecting the privacy and personal work-life balance of your team members. The contract of work is clearly defined, and we should avoid being rude or invasive in our requests, leaving the rest for coffee time, if we are into socializing.
5. Avoid focusing solely on your own work without contributing to work you may be a stakeholder of in the future, i.e. do not be short-sighted: collaborate. It is not lost time, but time spent learning the environment.
6. Avoid talking badly about your colleagues, especially behind their backs; what goes around comes around. Office politics creates a lack of harmony, as the majority of my previous colleagues realized, and some of the consequences can have legal repercussions.

7. Productivity is important; however, one has to take into consideration appropriate documentation and training that allow a task-based, organized manner of working, whether under supervision or without it. Performance can be measured at the individual and team levels, and reward-based bonuses might motivate, although money does not resolve all organizational problems; professional managers need to look beyond it and think widely these days.
8. Manage your working tools, whether they consist of a toolbox or your email; a search engine helps. Putting effective tips into practice in a standardized, uniform manner may be a good way to start working towards ISO certification for quality (e.g. ISO 9001).
9. We tend to believe that inconsistencies confirm the fact that someone else is in error, however, do we ever question our own assumptions?
You may contact Jon at camilleri.jon@gmail.com if you want more tips and hints on managing your day-to-day productivity, and for other business consultancy services.

Monday, December 15, 2014

How to file a report for discrimination

Evidence
1. Job advert as advertised in a newspaper or online
2. Your Curriculum Vitae, certificates and copies of correspondence exchanged with the employer or provider of a service
3. Recording of the job interview. Remember that, under data protection laws, you have to advise the person that you are recording them if you are keeping personal information about them; exceptions may apply where the recording serves a legitimate purpose under criminal law, or where you are a private investigator.
See the information at https://socialdialogue.gov.mt/en/NCPE/Pages/NCPE_Home.aspx: you need to fill in a complaint form, after which an arbitration process is initiated. You may be required to give witness testimony as well. Different countries have different validation criteria, and some are not so clear about their requirements; it is useful to ask questions where you feel your complaint has not been handled as you wished.
A book titled The Face of Discrimination, available on Google Books, details the qualitative data that need to be available. These include details of what happened (facts), an explanation of what happened (response), witness statements - preferably signed and initialed so they are more credible (witness), information on the perpetrators and their contact details (who), and ideally a legal endorsement of the facts (legal attorney). I would add legal references, so the claim indicates which laws and policies were breached by the facts or events reported. If you are no longer in a position to write it up yourself, have a lawyer write it up for you, preferably a rights lawyer.