Sunday, May 30, 2010

The semantic web - the future of the www

The internet as we all know it, Web 2.0, was designed with one thing in mind - people. The web is all about the social aspect of the internet: it was designed to help people communicate and share information. The Semantic Web, on the other hand, is designed for machines. While the Web can be operated through computer systems, it still needs a human operator: it is not possible for a computer to perform tasks such as searching for information without a human to guide it.

So what exactly is "the Semantic Web"?
The Semantic Web is an evolving extension of the World Wide Web that adds new data and metadata to Web documents. The general idea is that web content will be expressed not only in natural language, but also in a format that can be read and used by software agents, permitting them to find and integrate information more easily. In other words, the Semantic Web shifts the focus from a communication tool for people to a platform whose content machines can understand. This extension is what will soon allow machines to process data on their own, not only under human direction. If this succeeds, the need for humans to guide computers through such tasks would be eliminated, or at the very least minimized.

Several applications already in use today demonstrate the Semantic Web's potential:

FoaF
A popular application of the Semantic Web is Friend of a Friend (FoaF), which uses RDF to describe the relationships people have with other people and the "things" around them. FOAF permits intelligent agents to make sense of the thousands of connections people have with each other, their jobs and the items important to their lives; connections that may or may not be enumerated in searches using traditional web search engines. Because the connections are so vast in number, human interpretation of the information may not be the best way of analyzing them. FOAF is an example of how the Semantic Web attempts to make use of the relationships within a social context.
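To make this concrete, here is a minimal sketch of how FOAF data might be expressed as RDF triples that a software agent can traverse, using Python's rdflib library; the people, names and namespace are invented for illustration and are not taken from any real FOAF file.

```python
# Minimal FOAF sketch using rdflib (pip install rdflib).
# The people and relationships below are invented for illustration.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/people/")  # hypothetical namespace

g = Graph()
g.bind("foaf", FOAF)

alice, bob = EX.alice, EX.bob

# Describe two people and the relationship between them as RDF triples.
g.add((alice, RDF.type, FOAF.Person))
g.add((alice, FOAF.name, Literal("Alice")))
g.add((alice, FOAF.knows, bob))
g.add((bob, RDF.type, FOAF.Person))
g.add((bob, FOAF.name, Literal("Bob")))

# An agent can now query the graph instead of parsing natural language.
for person in g.subjects(RDF.type, FOAF.Person):
    for friend in g.objects(person, FOAF.knows):
        print(g.value(person, FOAF.name), "knows", g.value(friend, FOAF.name))
```

Because the "knows" relation is explicit in the data rather than buried in prose, an agent can follow such links across many profiles without any natural-language understanding.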

Twine
Twine claims to be the first mainstream Semantic Web app. Twine automatically learns about you and your interests as you populate it with content, building a "Semantic Graph". When you put in new data, Twine picks out and tags certain content with semantic tags - e.g. the name of a person. An important point is that Twine creates new, semantically rich data. But it's not all user-generated: Twine has also run machine learning against Wikipedia to "learn" about new concepts, and will eventually tie into services like Freebase.

For additional applications you can visit: http://www.readwriteweb.com/archives/10_semantic_apps_to_watch.php

In my opinion, the next important IS developments will be related to the efficacy of information processing rather than information gathering or storage. In addition, I think that the next important information-processing leap will be related to web intelligence and specifically, in the near future, to the Semantic Web. This trend is already visible in the large investments AI companies are making in the area, as well as in the real demand for, and popularity of, some of the preliminary applications. I also think that the development of an automatic computerized system that successfully uses semantic coding and retrieval will change the whole way we know and research the IS field today.
The following video explains and demonstrates possible applications of the Semantic Web:

Tuesday, May 25, 2010

Waves of Innovation

In my last post I tried to describe an analogy between the technological innovation cycle and the cycle of scientific revolutions. Recently, I found out that some elements of this analogy have already been described in an article by Giovanni Dosi (1982): "Technological paradigms and technological trajectories: A suggested interpretation of the determinants and directions of technical change".

In the paper, the author also stressed the similarity of the procedures and nature of "technologies" with those of scientific research. In particular, he coined a new term, "technological paradigms" (or research programmes), which perform a role similar to that of "scientific paradigms" (or research programmes), and proposed a model based on this similarity.

The model Dosi proposed tries to account for both continuous changes and discontinuities in technological innovation. Continuous changes are often related to progress along some technological trajectory defined by a technological paradigm, while discontinuities are associated with the emergence of a new paradigm (a paradigm shift). Dosi claims that the origin of a new technological paradigm stems from the interplay between scientific advances, economic factors, institutional variables, and unsolved difficulties on established technological paths.

The differentiation between continuous and discontinuous innovation is useful for understanding the initiation of a new paradigm as well as the position and diffusion of a specific technology or body of knowledge. For example, the figure below (Hargroves and Smith, 2005) shows six waves of sci-tech innovation between 1785 and 2020, which can also be regarded as six different paradigms. Continuous innovation is what happens within a wave, while discontinuous innovation is the jump from one wave to the next.







Sources

Dosi, G. (1982): Technological paradigms and technological trajectories: A suggested interpretation of the determinants and directions of technical change, Research Policy, 11 (3), pp.147-162.


Hargroves, K. and Smith, M.H. (2005): The Natural Advantage of Nations: Business Opportunities, Innovation, and Governance in the 21st Century, p. 17.

Friday, May 21, 2010

The innovation cycle and The Structure of Scientific Revolutions

What are the basic steps and processes toward new innovation? According to the innovation cycle, there are two main parallel processes: exploration and exploitation.

The main idea is that the process that includes the initiation of new ideas, the creation and development of new inventions, and their utilization has the form of a cycle with two main spheres/directions:

Exploration - Creating new patterns, inventing new technologies. It includes things captured by terms such as search, variation, risk taking, experimentation, play, flexibility, discovery, innovation, and the long term.

Exploitation - Optimizing an existing pattern by taking small steps. It includes things such as refinement, choice, production, efficiency, selection, implementation, execution, and short-term, immediate, certain benefits.
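To make the trade-off concrete, here is a small illustrative sketch (my own toy example, not taken from the innovation literature) of the classic multi-armed-bandit view of exploration versus exploitation: with probability epsilon the decision maker explores a random option, otherwise it exploits the option that has paid off best so far. All payoff numbers are made up.

```python
import random

# Toy example: three "technologies" with unknown average payoffs.
true_payoffs = [0.3, 0.5, 0.7]   # hidden from the decision maker
estimates = [0.0, 0.0, 0.0]      # running estimates of each option's payoff
counts = [0, 0, 0]
epsilon = 0.1                    # fraction of effort devoted to exploration

random.seed(42)
for step in range(1000):
    if random.random() < epsilon:
        choice = random.randrange(3)              # explore: try something new
    else:
        choice = estimates.index(max(estimates))  # exploit: refine the best-known option
    reward = random.gauss(true_payoffs[choice], 0.1)
    counts[choice] += 1
    estimates[choice] += (reward - estimates[choice]) / counts[choice]  # update running mean

print("effort allocated to each option:", counts)
print("estimated payoffs:", [round(e, 2) for e in estimates])
```

Too little exploration and the agent gets locked into a suboptimal pattern; too much and it never fully exploits what it has already learned - the same balance an organization faces.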

An organization, such as a firm, a government or a political party, has to choose how much of its resources to allocate to each of these activities. The innovation cycle is used broadly to describe the strategy a company chooses in order to balance the need for new innovations with the urge to improve existing ones and fully exploit their potential (Pfizer is an excellent example of the continuous rivalry between these two strategic needs).



This model reminds me of some of the ideas described by Thomas Kuhn in his famous book The Structure of Scientific Revolutions. In this book (published in 1962), Kuhn analyzed the history of science and tried to establish a model of the basic mechanism underlying progress and revolutions in science. Kuhn argues that the evolution of scientific theory does not emerge from the straightforward accumulation of facts, but rather from a set of changing intellectual circumstances and possibilities. The basic mechanism described by Kuhn includes three main phases of progress:

The first phase, which exists only once, is the pre-paradigm phase, in which there is no consensus among scientists on any particular theory, though the research being carried out can be considered scientific in nature. If the scientific community eventually gravitates to one of these conceptual frameworks, and ultimately to a widespread consensus on the appropriate choice of methods, terminology, etc., then the second phase, normal science, begins.

In the second phase, puzzles are solved within the context of the dominant paradigm. As long as there is general consensus within the discipline, normal science continues. Over time, progress in normal science may reveal anomalies: facts that are difficult to explain within the context of the existing paradigm. Usually these anomalies are resolved, but in some cases they may accumulate to the point where normal science becomes difficult and weaknesses in the old paradigm are revealed. Kuhn refers to this as a crisis; crises are often resolved within the context of normal science.

However, after significant efforts of normal science within a paradigm fail, science may enter the third phase, that of revolutionary science, in which the underlying assumptions of the field are re-examined and a new paradigm is established. After the new paradigm's dominance is established - a process known as a paradigm shift - scientists return to normal science, solving puzzles within the new paradigm. A science may go through these cycles repeatedly, though Kuhn notes that it is a good thing for science that such shifts do not occur often or easily. For a paradigm shift to occur, Kuhn claims that young, unbiased, enthusiastic scientists need to enter the field, because fresh, innovative ideas can only grow in minds not yet committed to the old paradigm…




The parallels between the normal-science era and the pattern-optimizing (exploitation) stage, and between scientific revolution and the pattern-creating (exploration) stage, are clear. For a new idea or invention to emerge, there is sometimes a need for a paradigm shift, one that builds on the accumulated subtle signs of "anomalies".


Sources
http://www.des.emory.edu/mfp/kuhnsyn.html
http://en.wikipedia.org/wiki/The_Structure_of_Scientific_Revolutions
http://www.mit.edu/~pjl/page2/files/exploration_exploitation.pdf
http://visualsignifier.com/kuhnhome.html

Monday, May 10, 2010

Artificial Creativity

In the last IS class we discussed the main characteristics of the innovative process and tried to identify, define and model its main stages. Although the implementation phase of the innovation process - the teamwork, the prototyping, and adjusting the idea in response to feedback - holds substantial significance, it seems to me that the most important step in the innovation process is actually the initiation of the idea.

In my opinion, the ability to come up with a new idea or to find a new solution to a problem is tightly linked to the question of the roots of creativity. Most people tend to see creativity as a gift, as something people either have or don't, and even as something that can't really be learned but only developed, assuming one has the right "genes" for it. This train of thought has led many to the assumption that creativity cannot be modeled and therefore that it is impossible to create a creative machine - a computer - that could produce brand new ideas or products.


However, over the years this assumption has proved more and more questionable, and today abundant examples exist that prove it wrong (at least to some extent). The main examples come from the relatively new field of Artificial Creativity. Artificial Creativity (or computational creativity) is a branch of Artificial Intelligence that deals with the development and exploration of systems that exhibit creative behavior. This includes systems capable of such things as scientific invention, visual artistry, music composition and story generation.

Here are some famous examples of Artificial Creativity:
  • A computer-robot that paints original paintings by itself: Created by Harold Cohen, "Aaron" is an AI-based program (robot artist) that creates original paintings, each one completely different. AARON's paintings are so impressive that if a human had created them, we would regard him or her as an acclaimed artist. Indeed, hard copies of AARON's paintings have hung in museums around the world (London's Tate Modern, Amsterdam's Stedelijk Museum, the San Francisco Museum of Modern Art, the Brooklyn Museum, and Washington's Capital Children's Museum, to name a few).
    Here are some examples of its original paintings:



    http://www.scinetphotos.com/auction.html

    http://www.stanford.edu/group/SHR/4-2/text/cohen.html

  • A computer that composes original music - Experiments in Musical Intelligence is an original and provocative study of computational creativity in music. David Cope, Professor Emeritus at the University of California at Santa Cruz, asks whether computer programs can effectively model creativity - and whether computer programs themselves can create. Defining musical creativity, and distinguishing it from creativity in other arts, Cope presents a series of experimental models that illustrate salient features of musical creativity. On his website you can listen to pieces of music composed by a computer: http://artsites.ucsc.edu/faculty/cope/mp3page.htm



  • Chatting? - A.L.I.C.E. (Artificial Linguistic Internet Computer Entity) is a natural language processing chatterbot - a program that engages in a conversation with a human by applying heuristic pattern-matching rules to the human's input. It is one of the strongest programs of its type and has won the Loebner Prize, awarded to the most accomplished human-like conversational programs, three times. However, the program is still unable to pass the Turing test, as even a casual user will often expose its mechanistic aspects in short conversations. Why don't you try chatting with A.L.I.C.E.?:



Many more chatterbots are available today, including Fred the psychoanalyst... Some of the famous ones can be found at this link: http://www.simonlaven.com/fred.htm

  • Poems:
"More than iron, more than lead, more than gold
I need electricity.
I need it more than I need lamb or pork
or lettuce or cucumber.
I need it for my dreams."
(Racter, from The Policeman's Beard Is Half Constructed)

Well... not exactly Shakespeare, but impressive enough considering that it was made by a computer... Here are some more examples of poems written by Ray Kurzweil’s Cybernetic Poet:

Page

Sashay down the page

through the lioness

nestled in my soul.

(A Haiku written by Ray Kurzweil’s Cybernetic Poet after reading poems by Wendy Dennis)

Imagine Now And Sing

Imagine now and sing,

creating myths

forming jewels from the falling snow.

Soul

You broke my soul

the juice of eternity,

the spirit of my lips.


I Think I'll Crash

I think I'll crash.

Just for myself with God

peace on a curious sound

for myself in my heart?

And life is weeping

From a bleeding heart

of boughs bending

such paths of them,

of boughs bending

such paths of breeze

knows we've been there

  • Telling a joke? At Edinburgh University, AI researcher Kim Binsted has developed Jape-1, a program for telling jokes. The program builds up jokes according to simple 'templates', such as 'What do you get if you cross an X with a Y?', and chooses words for X, Y and the pay-off word Z according to properties of the words, such as their sound and associations (a toy sketch of the template idea follows the examples below). Can you spot the Jape-1 jape, and the two from The Crack-a-Joke Book by human joke-merchant Kaye Webb?

A - What do you give a hurt lemon? Lemonade.
B - What kind of tree can you wear? A fir coat.
C - What runs around a forest making other animals yawn? A wild boar.
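Binsted's actual system relies on a large lexicon and phonetic-similarity rules, but a toy sketch of the template idea might look like this; the word pairs and pay-off lines below are classic playground riddles used for illustration, not entries from Jape-1's lexicon.

```python
import random

# Toy version of the template idea: a fixed joke frame is filled with
# word pairs chosen for their associations. Jape-1 chose X, Y and the
# pay-off word Z automatically, using word sound and word associations.
crossings = [
    ("a sheep", "a kangaroo", "A woolly jumper."),
    ("a snowman", "a vampire", "Frostbite."),
    ("an elephant", "a fish", "Swimming trunks."),
]

template = "What do you get if you cross {x} with {y}? {z}"

def tell_joke():
    x, y, z = random.choice(crossings)
    return template.format(x=x, y=y, z=z)

print(tell_joke())
```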



These examples led me to a question: is it possible to have artificial innovators? In other words, would it be possible to have a computerized system that is creative and innovative in the sense of being able to come up with totally different ideas and to invent genuinely new processes or tools?

*B is by Jape-1

Sources

http://www.thinkartificial.org/artificial-creativity/
http://www.thinkartificial.org/category/artificialcreativity/

http://www.thinkartificial.org/aesthetics/absolut-machines/

Wednesday, April 28, 2010

The other sides of the napkin - On visualization and other mental representations

In the last IS class we discussed the great effect that visualization has on our perception. In short, it has been suggested that simple drawings, sometimes on the back of a napkin, can explain complicated things more effectively, intuitively and rapidly than trying to explain them using words alone.

That approach, although it sounds very compelling, assumes that we all encode and decode our sensory information more easily and effectively using visualization. Meaning, when we hear someone's story about a carriage full of cabbages, we immediately picture it in our minds (the brain creates simulations of the story), and the information is therefore easy to retrieve using relevant visual cues.


But do we all use the visual modules so extensively? Is visualization always the preferable or default way of encoding and decoding the environment?


According to NLP theory (Neuro-Linguistic Programming), people may have and use different representational systems (also known as sensory modalities) as their leading information-processing mechanism. The main representational system is linked to one of the senses - Visual, Auditory or Kinesthetic. The other two senses, gustatory (taste) and olfactory (smell), which are closely associated, often seem to be less significant in general mental processing and are often considered jointly as one.
The leading representational system is reflected in the way people tend to think to themselves and in their everyday language. For example, Einstein credited his discovery of special relativity to a mental visualization strategy of "sitting on the end of a ray of light", and many people talk to themselves in their heads as part of decision-making.
In this way, people may choose different ways of expressing their ideas.


Someone who prefers the Visual representational system may say:
  • "I see what you mean"
  • "You have shown me a bright idea on how to proceed and I would like to look into it further".
  • and will probably use more of: see, look, bright, clear, picture, foggy, view, focused, dawn, reveal, illuminate, imagine, hazy, an eyeful, short-sighted, sight for sore eyes, take a peek, tunnel vision, bird’s eye view, naked eye, paint a picture
On the other hand, the Auditory type may prefer to say:
  • "You have told me of a way to proceed that sounds good and I would like to hear more about it."
  • “I can’t hear what you are saying”
  • “This doesn’t sound right.”
  • And use more of: hear, tell, sound, resonate, listen, silence, deaf, squeak, hush, roar, melody, make music, harmonize, tune in/out, rings a bell, quiet as a mouse, voiced an opinion, clear as a bell, give me your ear, loud and clear, purrs like a kitten, on another note
The Kinesthetic type -
  • "You have handed me a way to proceed that is on solid ground and I would like to get more of a feel for it. "
  • “I can’t grasp what you are saying”
  • “I don’t have a feeling for this.”
  • Common/preferred words in use: grasp, feel, hard, unfeeling, concrete, scrape, solid, touch, get hold of, catch on, tap into, heated argument, pull some strings, sharp as a tack, smooth operator, make contact, throw out, firm foundation, get a handle on, get in touch with, hand in hand, hang in there
The point behind all this is that the way we communicate with each other can become far more effective if we simply pay attention to the representational system of the other person and use language that fits that system (the small sketch below illustrates one crude way to guess that system from word choice).
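As a crude illustration of that idea (my own toy sketch, not an established NLP tool), one could count the sensory predicates in what someone says to guess which representational system dominates; the word lists are shortened versions of the ones above.

```python
# Toy sketch: guess a speaker's preferred representational system by
# counting sensory predicates. The word lists are shortened from above.
PREDICATES = {
    "visual": {"see", "look", "bright", "clear", "picture", "view", "focus", "imagine"},
    "auditory": {"hear", "tell", "sound", "listen", "tune", "ring", "loud", "quiet"},
    "kinesthetic": {"feel", "grasp", "touch", "solid", "handle", "firm", "concrete", "hold"},
}

def dominant_system(text: str) -> str:
    words = text.lower().replace(",", " ").replace(".", " ").split()
    scores = {system: sum(word in wordset for word in words)
              for system, wordset in PREDICATES.items()}
    return max(scores, key=scores.get)

print(dominant_system("I see what you mean, the picture is clear to me"))    # visual
print(dominant_system("That doesn't sound right, I hear a different tune"))  # auditory
```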




In my opinion, although most people's representational system is indeed visual (around 50-60%), there are many others (40-50%) who decode information in different ways, and therefore it can sometimes be even more effective to go beyond drawing and visualizing your ideas and to use more auditory or kinesthetic words and approaches to appeal to, create rapport with, and influence your listener.

Monday, April 26, 2010

Subliminal influence

It's all about information, isn't it? Everything is meaningful to us only if we can grasp it.
Not necessarily by being able to think about it consciously, but simply by perceiving it with our senses. In other words, if there is something out there that cannot trigger any electrical currents in our brain, it simply does not exist. Well... at least at the moment, it sounds logical and reasonable to assume that...

But what about all the tons of information that we do absorb every day that does not enter our conscious mind? All the colours and pictures we see every day and all the conversations we are forced to hear... Are we saving all this information somewhere? Could it be of importance to us in the future? And if so, how do you (or your brain) decide what is important and what is not?

Obviously, perception is not the same thing as noticing. We can perceive things and respond to them without noticing their effect, or even their existence. Why should this be possible?
Our brain didn't evolve to serve strictly as a faithful recorder of images, but as a guidance system for our survival. As a result, it records things as faithfully as is useful, and distorts them when that is not useful (or whenever it retrieves a memory). Our sensory systems evolved to be very flexible in interpreting what we perceive, so that we can respond quickly and adaptively to changes around us.

So yes, the brain is forced to store tons of unneeded information (and data) to which we are constantly exposed. The most interesting question, then, is: can all the information we perceive without noticing influence the decisions and judgements we will make in the future?

The answer is absolutely yes! And its influence on our behaviour is far greater than we intuitively assume - all under what is known as subliminal perception/stimulation/influence.
It seems that there are at least five basic phenomena of subliminal stimulation:

1. Mere Exposure Effect -- Exposure to an image without awareness predisposes us to prefer that image over others.

2. Poetzl Effect -- Words or images perceived without awareness appear in altered form in imagery and dreams some short time later.

3. Affective Priming -- Exposure to an emotionally compelling image without awareness causes us to respond emotionally without knowing why.

4. Semantic Priming -- Exposure to a word without awareness tends to bias our perception of subsequent words for a fraction of a second.

5. Psychodynamic Activation -- Exposure to certain kinds of fantasy images or suggestion without awareness can influence mental state or psychosocial adaptation in a meaningful and persistent way.


The New York Times has a great article on how our actions and decisions can be subconsciously 'primed' by the world around us. The article gives some examples based on recent scientific findings:
  • Psychologists at Yale altered people’s judgments of a stranger by handing them a cup of coffee. The study participants, college students, had no idea that their social instincts were being deliberately manipulated. On the way to the laboratory, they had bumped into a laboratory assistant, who was holding textbooks, a clipboard, papers and a cup of hot or iced coffee — and asked for a hand with the cup.

    That was all it took: The students who held a cup of iced coffee rated a hypothetical person they later read about as being much colder, less social and more selfish than did their fellow students, who had momentarily held a cup of hot java.

  • New studies have found that people tidy up more thoroughly when there's a faint tang of cleaning liquid in the air; they become more competitive if there’s a briefcase in sight, or more cooperative if they glimpse words like "dependable" and "support" — all without being aware of the change, or what prompted it.



One of the fascinating (though sometimes disturbing) things about subliminal influence is the possibility of embedding it within sophisticated commercials. In the following video, Derren Brown, a British psychological illusionist, demonstrates the use of such subliminal manipulation on advertising experts:



















Sources

http://www.nytimes.com/2007/07/31/health/psychology/31subl.html?ei=5090&en=62f9b092a91bc6dc&ex=1343534400&partner=rssuserland&emc=rss&pagewanted=all

http://findarticles.com/p/articles/mi_g2699/is_0006/ai_2699000639/
http://www.mindhacks.com/blog/2007/08/the_modern_science_o.html
http://www.realmagick.com/articles/49/549.html
http://howtheychangeyourmind.blogspot.com/

The hermeneutic cycle

In the last IS class I was introduced to a new term - the hermeneutic cycle - as an infinite way of looking: whenever you read or observe things, you learn something new.

"Hermeneutics," from Greek hermêneuô, "to interpret or translate" (from the messenger of the gods, Hermes), is the theory and practice of interpretation, originally the interpretation of texts, especially religious texts. The "hermeneutic cycle" is the process by which we return to a text, or to the world, and derive a new interpretation -- perhaps a new interpretation every time, or a new one for every interpreter. It is clear that this happens all the time. We can understand a book, a movie, etc. a little differently each time we read or see it.

I find that this term can be linked to two other concepts:

The first is fractals, or the fractal way of nature:
For many years I have been fascinated by the fact that there are similarities in the patterns and morphology of totally different things in nature. For example, the solar system is structured roughly like the atom, and galaxies are shaped like a nautilus. It took a while until I was introduced to what scientists call fractals.
A fractal is defined as a rough or fragmented geometric shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole, a property called self-similarity. Because they appear similar at all levels of magnification, fractals are often considered to be infinitely complex (in informal terms).

Basic fractals can be made by repeating simple geometric steps. For example:
Start with an equilateral triangle, and replace the middle third of every line segment with a new equilateral-triangle "bump". Keep repeating this to infinity, and you get yourself a fractal (known as the Koch snowflake).
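Here is a small sketch of that construction using Python's built-in turtle module: each recursive call replaces a segment with the four shorter segments of the "bump", and even a few repetitions already give a convincing snowflake.

```python
import turtle

def koch_segment(t, length, depth):
    """Draw one side, replacing the middle third of each segment with a 'bump'."""
    if depth == 0:
        t.forward(length)
        return
    koch_segment(t, length / 3, depth - 1)
    t.left(60)                   # turn outward to start the bump
    koch_segment(t, length / 3, depth - 1)
    t.right(120)                 # come back down the other side of the bump
    koch_segment(t, length / 3, depth - 1)
    t.left(60)
    koch_segment(t, length / 3, depth - 1)

def koch_snowflake(length=300, depth=3):
    t = turtle.Turtle()
    t.speed(0)
    for _ in range(3):           # the three sides of the original triangle
        koch_segment(t, length, depth)
        t.right(120)
    turtle.done()

koch_snowflake()
```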


















The nautilus is one of the most famous examples of a self-similar pattern in nature; its near-perfect spiral is often described as a Fibonacci (logarithmic) spiral.
Snowflakes are another beautiful example of fractals in nature.









A special type of broccoli (Romanesco), this cruciferous and tasty cousin of the cabbage is a particularly symmetrical fractal.

So the question is: how come the more we explore the world, the more we tend to see and find the same general patterns, no matter how deep and close or how high and vast we look? It seems there is a fractal-like characteristic to nature, and no matter how deep you dig you will always come up with more observations and questions… However, it seems that although everything looks similar and may have the same shape and form, it is actually different in the details and depends on the observer! That is where the quantum-mechanics effect comes in.

The second concept is therefore the link to some findings from quantum mechanics that describe the influence of the observer on reality.
For almost a century, physicists have been amazed by the fact that in the most careful and precise experiments, the measurement equipment - the fact that someone is observing the experiment - appears to change the behaviour of the system and therefore influences the results. Today, some scientists and philosophers of quantum theory go further and claim that what actually happens is that the conscious mind of the observer influences the behaviour of the system!
The best explanation of the subject I have seen is by Dr. Quantum (taken from "What the Bleep Do We Know!?"):
This kind of influence sometimes sounds as if it relates to mystical or superstitious phenomena like the evil eye… however, quantum theory is widely accepted among scientists today and has never failed to predict experimental results (at the atomic scale).


In my opinion, the hermeneutic cycle is only one aspect of a discovery that religious scholars made (and philosophers formulated into a theory), one that touches on more fundamental inner aspects of nature, as revealed by natural science and quantum physics.



Sources

Back up your soul!

A brain–computer interface (BCI), sometimes called a brain–machine interface, is a direct communication pathway - transferring information - between a brain and an external device. BCIs are often aimed at assisting, augmenting or repairing human cognitive or sensory-motor functions. But today, some scientists are playing with the idea of actually using this technology to keep our memories, experiences, and even motor skills on an external hard drive...

Today, existing technology enables us to connect electrodes directly to the brain (of any mammal) and record the activity of a specific area. Then, from the electrical pulses that have been recorded, experimenters are able to analyze and draw conclusions about the inner brain processes taking place in that specific area.

These kinds of devices have become more and more common in recent years and are being used for various applications. The first such system, known as the BrainGate Neural Interface System, consists of an array of electrodes (around 100) that record neural activity from the motor cortex of the brain. Signals from the implant are decoded and processed by a computer, allowing them to be translated into movement commands.
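The decoding algorithms actually used in BrainGate are far more sophisticated, but the basic idea can be sketched as follows; this is a toy, hypothetical example with made-up numbers, in which the firing rates recorded on the electrodes are combined linearly into a two-dimensional cursor velocity, with weights that would normally be fitted during a calibration session.

```python
import numpy as np

# Toy sketch of population decoding (not BrainGate's actual algorithm):
# per-electrode firing rates are mapped linearly to a 2-D cursor velocity.
rng = np.random.default_rng(0)

n_electrodes = 96
# In a real system these weights are learned while the user imagines
# moving a cursor to known targets; here they are random placeholders.
weights = rng.normal(size=(2, n_electrodes)) * 0.05   # maps rates -> (vx, vy)

def decode_velocity(firing_rates):
    """Map a vector of per-electrode firing rates (spikes/s) to a cursor velocity."""
    return weights @ firing_rates

# Simulate one 100 ms bin of recorded activity and decode it.
firing_rates = rng.poisson(lam=20, size=n_electrodes)  # spikes/s on each electrode
vx, vy = decode_velocity(firing_rates)
print(f"decoded cursor velocity: ({vx:.2f}, {vy:.2f})")
```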
  • Converting information into speech - Eric Ramsey, 26, has locked-in syndrome, in which people are unable to move a muscle but are fully conscious. Ramsey, who suffered a brain-stem stroke at the age of 16, has an electrode implanted into a brain area that plans the movements of the vocal cords and tongue that underlie speech. The team of neuroscientists taking care of him has developed models that predict how neurons in this region fire during speech. They used these predictions to translate the firing patterns of several dozen brain cells in Ramsey's brain into the acoustic building blocks of speech.
















  • Controlling the movement of a prosthetic limb - Matthew Nagle, a 25-year-old man paralysed from the neck down, has a computer-linked implant placed in his brain that enables him to operate devices just by thinking about it. Matthew has learnt how to use a computer, open email and control a TV using the power of thought, and he can also move objects with a robotic arm. Watch him in this amazing video:




Using the exact same principle, scientists are now using electrodes to create the reverse effect: stimulating the brain with electrical pulses in order to execute specific operations. This has been extensively studied in animals over the last decades and recently tested successfully in humans! Here are some of the most famous examples:
  • Cochlear implants - a cochlear implant is an electronic device that is surgically implanted into a deaf person's cochlea and provides a sense of sound. As of April 2009, approximately 188,000 people worldwide(!) had received cochlear implants. The device usually includes a microphone, which picks up sound from the environment; a speech processor, which selectively filters sound to prioritize audible speech and sends the electrical sound signals through a thin cable to the transmitter; and a transmitter, which passes the processed sound signals to the internal device - an array of up to 22 electrodes wound through the cochlea that sends the impulses directly to the auditory nerve.
  • Artificial eyes - sometimes called a bionic eye, this is an experimental visual device intended to restore functional vision. Many kinds of devices have been tested in recent years and they are improving rapidly. Excellent video:


  • Deep Brain Stimulation - Parkinson's disease is one of the most common neurological disorders and has a severe effect on patients' everyday lives. By inserting electrodes into a certain area of the patient's brain and continuously stimulating it, patients are able to regain control of their movements and dramatically reduce tremor and rigidity. The method - deep brain stimulation - has dramatically changed the lives of many patients with uncontrollable tremors. Patients often can resume normal activities, such as feeding and dressing themselves, and can have active and fulfilling lives. The need for anti-tremor medications is often reduced or eliminated. An animation that shows how it works and an example of a patient before and after treatment:














A glance at the future

What about inserting more complex information directly into our brain? What if we could stimulate the brain and teach it (and us) totally new things? Just imagine the possibilities of having tons of information delivered directly to your grey matter in no time: you could "read" tons of books and watch every movie ever made at 10 MHz, you could play piano, violin or drums in less than an hour, or acquire all the martial-arts skills and be able to do all the Keanu Reeves stunts in real life! It's all about information, isn't it?

One of the most fascinating questions I have heard recently was: could we one day be able to "download" specific data/information/knowledge from our brain onto a flash drive? And if we could, why not download all our memories to a hard drive, and thereby simulate our brain activity and keep our entire soul eternal?

Futurists say that by 2050 immortality will be within our grasp:








Sources


http://en.wikipedia.org/wiki/Brain%E2%80%93computer_interface
Ray Kurzweil on how technology will transform us:
http://www.ted.com/talks/ray_kurzweil_on_how_technology_will_transform_us.html
http://www.physorg.com/news180620740.html
http://edition.cnn.com/2005/TECH/05/23/brain.download/
http://www.guardian.co.uk/science/2005/may/22/theobserver.technology
http://blogs.discovery.com/good_idea/2009/06/downloading-data-directly-into-your-brain.html
http://news.bbc.co.uk/2/hi/science/nature/6368089.stm
http://blog.taragana.com/science/2009/12/16/paralysed-man-successfully-controls-speech-synthesiser-with-thought-951/
http://www.jumpintotomorrow.com/?p=2821
http://edition.cnn.com/2007/HEALTH/conditions/12/14/locked.in/index.html

Friday, April 23, 2010

It's all in your brain - Some similarities between computerized IS and the human brain



Information Systems is quite a broad area of study that mainly refers to computer-based technology. When using the term, people usually refer to the process of acquiring data, storing it, and performing various kinds of data analysis.

However, even at first sight, it seems that there are parallels between computerized information systems and another very complex "information system" - the human brain.

Most brain researchers today tend to look at the human brain as nothing but a very sophisticated and extremely complicated computer… The brain's computational ability is such that, until now, the largest and fastest computers in the world cannot imitate even the capabilities of a one-year-old child's brain. The secret is that the brain does not contain only one processor but billions of them working in parallel. The human brain contains roughly 100 billion nerve cells (10^11). On average, each neuron is connected to other neurons through about 10,000 synapses, which puts the total number of connections on the order of 10^15. In this way, the brain's network of neurons forms a massively parallel information storage and processing system. This contrasts with conventional computers, in which a single processor executes a single series of instructions. In the brain, each neuron effectively performs as an independent computer, so the network is able to store vast amounts of data and to analyze the environment effectively.

Here, I will try to highlight some of the similarities and differences between the characteristics and terminology used in computerized information systems and those used in neuropsychology and psychophysics. I chose to focus specifically on memory.


Let's start with the similarities:

In general, computerized information systems include the following capabilities:

1) Computation - Perform high-speed, high-volume numerical computation.

So does the human brain. In terms of function, both are used for mathematical calculations, carrying out complex algorithms, and storing crucial information.

2) Communication - Provide fast, accurate, and inexpensive communication within and between organizations.

The brain's submodules provide parallel computation and communication between modules. The human brain contains specific modules/areas for categorized information and specific analysis processes. For example, there are specific areas for producing movements, sensing touch, producing speech or visualizing objects. Since information about the world is stored in our memory, it has to be stored in a very efficient, easy-to-retrieve way. There is evidence that the brain codifies and stores information in "semantic folders".

3) Automation - Automate both semiautomatic business processes and manual tasks.
Riding a bike, driving a car, eating, breathing, walking... Most of our daily motor procedures are unconscious, which means that the required sequences or programs are somehow already coded in the brain.

4) Storage - Store huge amounts of information in an easy-to-access, yet small space.
Storage of information in the brain is done by strengthening synaptic connections, by activating specific areas for a short time (short-term memory), or through morphological changes in neuron structure (long-term memory). There is evidence that information is codified in a kind of "semantic folder" located in the same brain structure/area; a famous example is category-specific deficits following brain damage.

5) Access - Allow quick and inexpensive access to vast amounts of information, worldwide.
The brain needs only a sandwich and a cup of coffee, and it works!

6) Knowledge - Facilitate the interpretation of vast amounts of data; enable collaboration anywhere, any time.

7) Competitiveness - Increase the effectiveness and efficiency of people working in groups in one place or in several locations, anywhere.

(Based on lecture 2 PPT)


Some more similarities:

8) Combining components
Both work by combining the processes of several components and parts to perform their tasks.
For example, a computer consists of many parts, including a motherboard (which is itself made up of many parts), disk drives, the processor, graphics cards and more... each of which has its own role in the computer's processes.
Like a computer, the brain is formed out of parts. Besides having the left and right hemispheres, there are also parts of the brain that take care of emotions, mathematical calculations, body coordination and many other tasks needed for our daily activities.


9) Electrical signals
Both work by transmitting electrical "logic signals" between their parts.
For example, a computer uses binary "on"/"off" logic signals (bits, put together as bytes) to communicate internally between components, represent information and store data. In a way, neurons in the brain are also either "on" or "off": they either fire an action potential or they do not.
The fact that the brain operates using "simple" electrical currents has applications in electrical stimulation techniques (see one of my other posts).
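As a very rough illustration of the analogy (my own sketch, not a biological model), a neuron can be caricatured as a threshold unit that is "on" when its weighted inputs exceed a threshold, much as a logic gate outputs a 1 or a 0.

```python
# A McCulloch-Pitts-style caricature: the unit "fires" (outputs 1) when its
# weighted inputs reach a threshold. Weights and threshold are chosen so it
# behaves like an AND gate; this is an illustration, not a biological model.
def neuron(inputs, weights, threshold):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0   # fire / don't fire

weights = [1.0, 1.0]
threshold = 1.5   # both inputs must be active for the unit to fire (AND)

for a in (0, 1):
    for b in (0, 1):
        print(f"inputs ({a}, {b}) -> output {neuron([a, b], weights, threshold)}")
```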

10) Upgrading and Evolution
Both can change with time (the brain evolves, while the computer is upgraded with technological advances).
For example, when any computer gets outdated, there are always options to upgrade, parts to replace, and faster, newer models to choose from. The brain of modern man is found to be significantly larger than that of humans from 1.7 million years ago.
(Newsweek article "The First Wanderers", 22/5/2000 issue: in May 2000 a team of scientists uncovered two 1.7-million-year-old Dmanisi skulls of Homo ergaster, notably smaller with only 780 cc of capacity compared with about 1,500 cc for the modern human skull - evidence of the human brain's evolution.)


So.. what are the differences?


Compensation ability -
The first, and probably the most important, difference in my opinion is brain plasticity. The brain, unlike any computer or network, has a remarkable ability to compensate and to change its modules' activity after damage. This unique characteristic - brain plasticity - is not fully understood and is the subject of extensive research.
For example:
http://www.youtube.com/watch?v=TSu9HGnlMV0


In contrast, most programs and engineered systems are brittle: if you remove some arbitrary parts, very likely the whole will cease to function.