
Artificial Intelligence: the logicists and the connectionists

Today there are two main groups in AI research.

One group is the logicists, the other is the connectionists. The logicists are the traditional AI scientists who try to create AI on computers based on the 'von Neumann model'.

The connectionists try to build AI by studying and recreating the human brain.

The logicists:

The history: One of the first people who thought about artificial intelligence was Gottfried Leibniz, a 17th-century mathematician who fantasised about logical techniques.

At that time most scientists tried to find an ultimate reasoning mechanism.

In the 1950s most scientists recognised that there could not be a single general theory of intelligence.

In the late 1960s Joseph Weizenbaum created a program called 'Eliza'. It was the first automated psychiatrist.

In the mid-1970s the Stanford Research Institute built a robot called 'Shakey'. With its AI program it was barely able to navigate an empty corridor, and it was widely seen as a flop.

Definition and structure:

The most common AI programs are 'expert systems'. These programs are designed to capture human expertise and use it to find the best logical solution.

In expert systems the programmers try to codify the principles of valid reasoning in the form of mathematical equations, so that the AI program sees life as a book of rules it has to obey.

Most expert systems use a piece of software called an 'inference engine'. This small program applies the rules programmed into the system to the information that is fed into it. The software keeps working as long as it has rules to apply; when no rules are left, the system has to make its decision.

An example: if the season is winter and a tree is green, the system concludes that the tree must be evergreen, and if the tree also has a specific shape, the system can infer what sort of tree it might be.
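To make this concrete, here is a minimal sketch in Python of how such an inference engine could work. The rules, fact strings and function names are made up for illustration (based on the tree example above); real expert-system shells are of course far more elaborate.

# Minimal forward-chaining inference engine (illustrative sketch only).
# Facts are plain strings; each rule is a set of conditions plus one conclusion.

rules = [
    ({"season is winter", "tree is green"}, "tree is evergreen"),
    ({"tree is evergreen", "tree is cone-shaped"}, "tree might be a spruce"),
]

def infer(facts):
    """Apply the rules until no new fact can be derived, then return all facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # this rule "fires"
                changed = True
    return facts

print(infer({"season is winter", "tree is green", "tree is cone-shaped"}))
# The result contains "tree is evergreen" and "tree might be a spruce".
# A fact the rule book does not cover (e.g. a tropical tree) simply leads nowhere.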

But what if the tree grows in a tropical climate and the system has no rule for such a circumstance?

That is one of the biggest problems of logic-oriented AI: an expert system has to be right the first time it makes a decision, because it is unable to retract it.

The system is also unable to solve any problem that is not in its 'big book of rules for successful and satisfied living'.

That is one of the reasons why many scientists today believe that the final solution lies somewhere between the logical and the connectionist AI systems.


Today's usage and future:

Today AI can be used for nearly everything. For example, the AI program that was used for the robot Shakey (mentioned above) was the grandfather of some organization and planning programs that were used in the Gulf War.

These programs helped the United States coordinate and organize their troop movements.

American Airlines also uses such an AI program to help find the best flight strategy, so that it can keep up with rival airlines.

Matsushita also uses AI programs in its new camcorders to cut out the jitter caused by shaky hands. The program eliminates every unexpected or too-quick motion, so that only clean and smooth motions stay on the film.

Another big company that uses expert systems is American Express. About 60% of the transactions American Express has to handle are so routine that an AI program can process them; the remaining 40% are handled by ordinary staff.

In the future AI has a good chance of advancing into the most common computer programs that we use in our everyday life. But we have to learn a lot if we want to use them correctly and efficiently.

Probably the best outcome would be if we could learn to combine our human sense with the machine's memory and logical skills. But this is easier said than done: it is already hard for us to share power with other humans, so how would some of us react if a computer program told us what to do or not to do? It is a hard process, but we will have to learn to make decisions in harmony with computers.


The connectionists:

The beginning: In 1908 a Spanish scientist named Cajal found out that neurons are the basic building blocks of our brain. Then in 1914 Adrian observed that these neurons and the other nerves use little electric impulses (about 40 mV) to communicate with each other. Many decades later, in 1964, the Australian scientist Eccles wrote a book about synapses and their electrical circuits. A synapse is like a small switch that uses chemical substances to pass an electric impulse from one nerve to another. With the help of these chemical substances our body can control how strongly such an impulse is transferred to the other nerve.

In 1943 McCulloch and Pitts, two American scientists, wrote a paper about neural nets and how they could be created artificially. They were the first to think about such nets, and with their work connectionist science began. In 1958 Rosenblatt (USA) built his legendary 'Perceptron', the first neural net, but more about that later.

After Rosenblatt many scientists tried to analyse his work; the most important were Lettvin (1959) and Hubel & Wiesel (1962), whose work on receptive fields started the mathematical investigation of neural nets. Eleven years after Rosenblatt, the two scientists Minsky and Papert from the Massachusetts Institute of Technology (MIT) nearly killed connectionist science, because they proved that a retina perceptron could not recognise every pattern. (At the time they were right, but today we know that a perceptron can recognise every pattern if it only has enough units.)

But some European scientists did not give up; the two most important are Teuvo Kohonen (Finland) and Eduardo Caianiello (Italy).


Neurons and Units:

The smallest parts of today's neural nets are units. Units work very much like the neurons in the human brain.

As seen above, a human neuron looks like a cuttlefish. It has fixed connections to other neurons, the dendrites, and it has adjustable connections, the nerves with their synapses.

The dendrites are like wires between two CPUs, so the electric impulses cannot be influenced on their way to the neuron.

In the synapses the electric impulses are converted into chemical substances when they reach the presynaptic membrane.

These substances have to go through the synaptic gap.

When they reach the postsynaptic membrane, they are converted back into electric signals.

Our body is able to control how much of these substances an electric impulse of a given strength can release. In this way the body can modify the weights between the neurons, and that is how we are able to learn.

The units, the basic building blocks of modern neural nets described below, are quite similar to these neurons.

A unit has three main parts:


1.) The input: here all input signals are summed up and combined into the net input.

2.) The activation function: it decides whether the net input is high enough to create an output signal. The three most common activation functions are shown above (Pic. 2 to 4): picture 2 shows a sigmoid activation function, picture 3 a linear function and picture 4 a step function.

3.) The output function: it creates an output signal if the activation function tells it to do so.

Like the brain's neurons, the units are connected with each other. These connections are called weights, and they can be positive or negative. A positive weight activates the next unit; a negative weight inhibits it.
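A small sketch in Python may make these three parts clearer. The sigmoid activation, the threshold of 0.5 and the example weights are assumptions chosen only for illustration, not a fixed standard.

import math

def sigmoid(net_input):
    """Sigmoid activation function (as in picture 2): squashes the net input into 0..1."""
    return 1.0 / (1.0 + math.exp(-net_input))

def unit_output(inputs, weights, threshold=0.5):
    """One unit: 1) sum the weighted inputs into the net input,
    2) apply the activation function, 3) let the output function fire or stay silent."""
    net_input = sum(x * w for x, w in zip(inputs, weights))
    activation = sigmoid(net_input)
    return 1 if activation >= threshold else 0   # output function

# Positive weights excite the unit, negative weights inhibit it.
print(unit_output([1, 1, 0], [0.8, -0.3, 0.5]))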

How can they learn?

In every neural net the knowledge is stored in the weights between the units. So if you want a neural net to learn, you have to change the weights, and if you want to store a lot of knowledge you need at least ten million units (the human brain has many billions of neurons).

If you want to change the weights, you have to use learning rules.

Some of the most common rules (a small code sketch of the first two follows after this list): 1.) The Hebb rule says that if units a and b are repeatedly and strongly activated at the same time, the weight between them should be increased.


2.) The delta rule computes the difference between the wanted output and the real output and uses it to determine how much the weight should be changed.

3.) Backpropagation is the delta rule for nets with hidden layers: it propagates the errors back from one layer to the previous one.



4.) Competitive learning: the units in one layer have negative weights to the other units in that layer, so that only the strongest one survives and remains active.
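As announced above, here is a small Python sketch of the first two rules. The learning rate and the simple textbook form of the update formulas are assumptions for illustration only.

LEARNING_RATE = 0.1   # assumed value, only for illustration

def hebb_update(weight, activation_a, activation_b):
    """Hebb rule: if units a and b are strongly active together, increase the weight."""
    return weight + LEARNING_RATE * activation_a * activation_b

def delta_update(weight, input_value, wanted_output, real_output):
    """Delta rule: change the weight in proportion to the error (wanted minus real output)."""
    error = wanted_output - real_output
    return weight + LEARNING_RATE * error * input_value

# Example: an output that is too low pulls the weight upwards.
print(delta_update(weight=0.2, input_value=1.0, wanted_output=1.0, real_output=0.4))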

Some examples:

Today there are quite a lot of neural nets in use. Sejnowski and Rosenberg built 'NETtalk', which is able to recognise and pronounce about 1,000 English words with a 93% chance of being right. It can also pronounce 78% of English words it has never seen before. It consists of seven input groups, each with 29 input units, one hidden layer with 80 units and one output layer with 26 units. It identifies a word by looking at a window of 7 letters, and each of the 26 output units stands for one English phoneme. The net was trained with the backpropagation learning rule.
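Only to illustrate the layer sizes mentioned above, a NETtalk-like structure could be written down as follows. The random weights and the empty input window are placeholders; this is in no way Sejnowski and Rosenberg's original program.

import math, random

INPUT_UNITS = 7 * 29    # 7 letter positions with 29 input units each
HIDDEN_UNITS = 80
OUTPUT_UNITS = 26       # one output unit per phoneme, as described above

def layer(inputs, weights):
    """Fully connected sigmoid layer: one weight list per unit of the layer."""
    return [1.0 / (1.0 + math.exp(-sum(x * w for x, w in zip(inputs, unit_weights))))
            for unit_weights in weights]

random.seed(0)
w_hidden = [[random.uniform(-0.1, 0.1) for _ in range(INPUT_UNITS)] for _ in range(HIDDEN_UNITS)]
w_output = [[random.uniform(-0.1, 0.1) for _ in range(HIDDEN_UNITS)] for _ in range(OUTPUT_UNITS)]

window = [0.0] * INPUT_UNITS            # the encoding of a 7-letter window would go here
phoneme_scores = layer(layer(window, w_hidden), w_output)
print(len(phoneme_scores))              # 26 scores, one per output unit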

The second example is the Perceptron. It has a few sensor units (= input), 4 or 5 association units and 3 or 4 response units (= output). In the Perceptron the user can only change the weights between the association units and the response units. The problem the Perceptron had was that it did not have enough units, so it was not able to recognise every input pattern.

Future and problems:

Today the realization of neural nets is a big problem, because it is very hard to connect more than 100 units in hardware. If you try to simulate them on normal computers, you first have to write a programming language for them. It is also very hard to program the learning rules in the form in which you want to use them.

Maybe in the future, when new laser-based processors are on the market, we will be able to use the big advantages of neural nets. With them it is very easy to implement parallel distributed processing and content-addressed memory access.




