Artificial Intelligence And Chess

ABSTRACT

This paper is an introduction to artificial intelligence (AI). Artificial intelligence is intelligence exhibited by an artificial entity, generally assumed to be a computer. AI systems are now in routine use in economics, medicine, engineering and the military, and are built into many common home computer software applications, traditional strategy games such as computer chess, and other video games. I have tried to explain the basic ideas of AI and its application to various fields, and to clarify the distinction between the conventional and computational categories of AI. The paper covers advanced approaches such as neural networks, fuzzy systems and evolutionary computation, as well as typical AI problems such as pattern recognition and natural language processing. These systems are at work throughout the world as a kind of artificial brain. Intelligence involves many mechanisms, and AI research has discovered how to make computers carry out some of them and not others. If a task requires only mechanisms that are well understood today, computer programs can give very impressive performances on it; such programs should be considered “somewhat intelligent”. AI is related to the similar task of using computers to understand human intelligence. We can learn something about how to make machines solve problems by observing other people, or simply by observing our own methods. On the other hand, most work in AI involves studying the problems the world presents to intelligence rather than studying people or animals, so AI researchers are free to use methods that are not observed in people, or that involve much more computing than people can do. I also discuss conditions for considering a machine intelligent: if a machine can successfully pretend to be human to a knowledgeable observer, then one certainly should consider it intelligent.

INTRODUCTION

Artificial intelligence

Artificial intelligence (AI) is defined as intelligence exhibited by an artificial entity; such a system is generally assumed to be a computer. Although AI has a strong science-fiction connotation, it forms a vital branch of computer science, dealing with intelligent behaviour, learning and adaptation in machines. Research in AI is concerned with producing machines that automate tasks requiring intelligent behavior. Examples include control, planning and scheduling, the ability to answer diagnostic and consumer questions, and handwriting, speech and facial recognition. As such, it has become a scientific discipline focused on providing solutions to real-life problems. AI systems are now in routine use in economics, medicine, engineering and the military, as well as being built into many common home computer software applications, traditional strategy games such as computer chess, and other video games.

History

The intellectual roots of AI, and the concept of intelligent machines, may be found in Greek mythology. Intelligent artifacts have appeared in literature ever since, and real mechanical devices have actually demonstrated behaviour with some degree of intelligence. After modern computers became available following World War II, it became possible to create programs that perform difficult intellectual tasks.

  • 1950 – 1960:- The first working AI programs were written in 1951 to run on the Ferranti Mark I machine at the University of Manchester (UK): a draughts-playing program written by Christopher Strachey and a chess-playing program written by Dietrich Prinz.
  • 1960 – 1970:- During the 1960s and 1970s, Marvin Minsky and Seymour Papert published Perceptrons, demonstrating the limits of simple neural networks, and Alain Colmerauer developed the Prolog programming language. Ted Shortliffe demonstrated the power of rule-based systems for knowledge representation and inference in medical diagnosis and therapy, in what is sometimes called the first expert system. Hans Moravec developed the first computer-controlled vehicle to autonomously negotiate cluttered obstacle courses.
  • 1980s ONWARDS:- In the 1980s, neural networks became widely used with the backpropagation algorithm, first described by Paul Werbos in 1974. The 1990s marked major achievements in many areas of AI and demonstrations of various applications. Most notably, Deep Blue, a chess-playing computer, beat Garry Kasparov in a famous six-game match in 1997.

Categories of AI:- AI divides roughly into two schools of thought:

  • Conventional AI.
  • Computational Intelligence (CI).
  • Conventional AI:- Conventional AI mostly involves methods now classified as machine learning, characterized by formalism and statistical analysis. It is also known as symbolic AI, logical AI, neat AI and Good Old-Fashioned Artificial Intelligence (GOFAI).

Methods include:

  • Expert systems: apply reasoning capabilities to reach a conclusion. An expert system can process large amounts of known information and provide conclusions based on it.
  • Case-based reasoning
  • Bayesian networks
  • Behavior-based AI: a modular method of building AI systems by hand.
  • Computational Intelligence (CI):- Computational intelligence involves iterative development or learning (e.g., parameter tuning in connectionist systems). Learning is based on empirical data and is associated with non-symbolic AI, scruffy AI and soft computing.

Methods include:

  • Neural networks: systems with very strong pattern recognition capabilities.
  • Fuzzy systems: techniques for reasoning under uncertainty; they have been widely used in modern industrial and consumer product control systems.
  • Evolutionary computation: applies biologically inspired concepts such as populations, mutation and survival of the fittest to generate increasingly better solutions to the problem. These methods most notably divide into evolutionary algorithms (e.g. genetic algorithms) and swarm intelligence (e.g. ant algorithms).
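
To make the evolutionary-computation bullet above concrete, the following is a minimal sketch, not drawn from any particular system, of a genetic algorithm maximizing a toy “count the 1-bits” fitness function; the population size, mutation rate and generation count are arbitrary illustrative choices.

    import random

    def fitness(bits):
        """Toy fitness function ("OneMax"): the number of 1-bits; higher is better."""
        return sum(bits)

    def mutate(bits, rate=0.05):
        """Flip each bit independently with a small probability."""
        return [1 - b if random.random() < rate else b for b in bits]

    def crossover(a, b):
        """Single-point crossover of two parent bit strings."""
        point = random.randrange(1, len(a))
        return a[:point] + b[point:]

    def evolve(pop_size=30, length=20, generations=50):
        # Start from a random population of bit strings.
        population = [[random.randint(0, 1) for _ in range(length)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            # "Survival of the fittest": keep the better half as parents.
            population.sort(key=fitness, reverse=True)
            parents = population[:pop_size // 2]
            # Refill the population with mutated offspring of random parent pairs.
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=fitness)

    print(fitness(evolve()))  # typically prints a value at or near 20, the optimum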

APPLICATIONS OF AI

  • Game Playing:- You can buy machines that play master-level chess for a few hundred dollars. There is some AI in them, but they play well against people mainly through brute-force computation, examining hundreds of thousands of positions (a minimal game-tree search sketch appears after this list).
  • Speech Recognition:- In the 1990s, computer speech recognition reached a practical level for limited purposes. Thus United Airlines replaced its keyboard tree for flight information with a system using speech recognition of flight numbers and city names. It is quite convenient. On the other hand, while it is possible to instruct some computers using speech, most users have gone back to the keyboard and the mouse as still more convenient.
  • Understanding Natural Language:- Just getting a sequence of words into a computer is not enough. Parsing sentences is not enough either. The computer has to be provided with an understanding of the domain the text is about, and this is presently possible only for very limited domains.
  • Computer Vision:- The world is composed of three-dimensional objects, but the inputs to the human eye and to computers’ TV cameras are two-dimensional. Some useful programs can work solely in two dimensions, but full computer vision requires partial three-dimensional information that is not just a set of two-dimensional views. At present there are only limited ways of representing three-dimensional information directly, and they are not as good as what humans evidently use.
  • Expert Systems:- A “knowledge engineer” interviews experts in a certain domain and tries to embody their knowledge in a computer program for carrying out some task. How well this works depends on whether the intellectual mechanisms required for the task are within the present state of AI. One of the first expert systems was MYCIN in 1974, which diagnosed bacterial infections of the blood and suggested treatments. It performed better than medical students or practicing doctors, provided its limitations were observed.
  • Heuristic Classification:- One of the most feasible kinds of expert system, given the present knowledge of AI, is one that puts information into one of a fixed set of categories using several sources of information. An example is advising whether to accept a proposed credit card purchase. Information is available about the owner of the credit card, his record of payment, and also about the item he is buying and about the establishment from which he is buying it (e.g., whether there have been previous credit card frauds at that establishment).
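
As noted under Game Playing above, the “brute force” search at the heart of chess machines is, in essence, minimax over a game tree. The sketch below is a deliberately tiny, hypothetical illustration: the game tree is hard-coded as nested lists of evaluation scores rather than generated from real chess positions, and real engines add refinements such as alpha-beta pruning and evaluate vastly more positions.

    def minimax(node, maximizing):
        """Plain minimax over a game tree given as nested lists.

        Interior nodes are lists of child nodes; leaves are static evaluation
        scores from the maximizing player's point of view.
        """
        if not isinstance(node, list):          # leaf: return its evaluation
            return node
        scores = [minimax(child, not maximizing) for child in node]
        return max(scores) if maximizing else min(scores)

    # A tiny hand-made game tree, two plies deep: the maximizing player picks
    # the branch whose worst-case reply (the opponent's best answer) is best.
    tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
    print(minimax(tree, maximizing=True))       # prints 3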

REVIEW OF LITERATURE

Artificial intelligence is a branch of computer science that aims to create intelligent machines, and it has become an essential part of the technology industry. Research associated with artificial intelligence is highly technical and specialized. The core problems of artificial intelligence include programming computers for traits such as knowledge, reasoning, problem solving, perception, learning, planning, and the ability to manipulate and move objects.

Knowledge engineering is a core part of AI research. Machines can often act and react like humans only if they have abundant information about the world: to implement knowledge engineering, an artificial intelligence must have access to objects, categories, properties and the relations between all of them. Instilling common sense, reasoning and problem-solving ability in machines is a difficult and tedious task.

Machine learning is another core part of AI. Learning without any kind of supervision requires the ability to identify patterns in streams of inputs, whereas learning with adequate supervision involves classification and numerical regression. Classification determines the category an object belongs to, while regression deals with learning, from a set of numerical input and output examples, a function that generates suitable outputs for new inputs. The mathematical analysis of machine learning algorithms and their performance is a well-defined branch of theoretical computer science often referred to as computational learning theory.

Machine perception deals with the capability to use sensory inputs to deduce different aspects of the world, while computer vision is the power to analyze visual inputs, with sub-problems such as facial, object and gesture recognition. Robotics is also a major field related to AI: robots require intelligence to handle tasks such as object manipulation and navigation, along with the sub-problems of localization, motion planning and mapping.
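
To make the distinction between classification and regression above concrete, here is a minimal sketch, not taken from the literature under review, of both tasks on invented toy data: a nearest-neighbour rule that assigns a category, and an ordinary least-squares fit that recovers a numeric function.

    def nearest_neighbor(train, x):
        """Classification: return the label of the training point closest to x."""
        return min(train, key=lambda pair: abs(pair[0] - x))[1]

    def fit_line(points):
        """Regression: ordinary least-squares fit of y = a*x + b."""
        n = len(points)
        mean_x = sum(x for x, _ in points) / n
        mean_y = sum(y for _, y in points) / n
        a = (sum((x - mean_x) * (y - mean_y) for x, y in points)
             / sum((x - mean_x) ** 2 for x, _ in points))
        b = mean_y - a * mean_x
        return a, b

    labeled = [(1.0, "small"), (2.0, "small"), (8.0, "large"), (9.0, "large")]
    print(nearest_neighbor(labeled, 7.5))       # -> "large" (a category)

    samples = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.1)]
    a, b = fit_line(samples)
    print(round(a, 2), round(b, 2))             # -> roughly 2.03 and 0.0 (the function y = a*x + b)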

Deep learning, while flashy, is really just a term for certain types of neural networks and related algorithms that often consume very raw input data, processing that data through many layers of nonlinear transformations in order to calculate a target output. Unsupervised feature extraction is an area where deep learning excels. Feature extraction is when an algorithm is able to automatically derive or construct meaningful features of the data to be used for further learning, generalization and understanding; in most other machine learning approaches, the burden of feature extraction, along with feature selection and engineering, traditionally falls on the data scientist or programmer. Feature extraction usually involves some amount of dimensionality reduction as well, that is, reducing the number of input features and the amount of data required to generate meaningful results, which brings benefits such as simplification and reduced computational and memory requirements. Programmers would train a neural network to detect an object or phoneme by blitzing the network with digitized versions of images containing those objects or sound waves containing those phonemes. If the network did not accurately recognize a particular pattern, an algorithm would adjust the weights. The eventual goal of this training was to get the network to consistently recognize the patterns in speech or sets of images that we humans know as, say, the phoneme “d” or the image of a dog. This is much the same way a child learns what a dog is: by noticing the details of head shape, behavior, and the like in furry, barking animals that other people call dogs.
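
The “adjust the weights” step described above can be shown in deliberately reduced form with a single sigmoid neuron learning a toy pattern; the OR truth table, learning rate and number of passes below are illustrative assumptions, and a real deep network stacks many such units in layers and propagates the error backwards through all of them (backpropagation).

    import math
    import random

    def sigmoid(z):
        """Nonlinear squashing function mapping any input to (0, 1)."""
        return 1.0 / (1.0 + math.exp(-z))

    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]   # OR truth table
    w = [random.uniform(-1, 1), random.uniform(-1, 1)]            # random starting weights
    bias, rate = 0.0, 0.5

    for _ in range(5000):                       # many passes over the tiny data set
        for (x1, x2), target in data:
            out = sigmoid(w[0] * x1 + w[1] * x2 + bias)
            err = target - out                  # how wrong the output is
            grad = err * out * (1 - out)        # error scaled by the sigmoid's slope
            w[0] += rate * grad * x1            # nudge each weight in the direction
            w[1] += rate * grad * x2            # that reduces the error
            bias += rate * grad

    print([round(sigmoid(w[0] * x1 + w[1] * x2 + bias)) for (x1, x2), _ in data])
    # -> [0, 1, 1, 1], matching the OR pattern it was trained on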

Machine learning came directly from the minds of the early AI crowd, and the algorithmic approaches developed over the years included decision tree learning, inductive logic programming, clustering, reinforcement learning and Bayesian networks, among others. As we know, none achieved the ultimate goal of General AI, and even Narrow AI was mostly out of reach with early machine learning approaches. As it turned out, one of the very best application areas for machine learning for many years was computer vision, though it still required a great deal of hand-coding to get the job done. People would write hand-coded classifiers such as edge detection filters so the program could identify where an object started and stopped, shape detection to determine whether it had eight sides, and a classifier to recognize the letters “S-T-O-P”. From all those hand-coded classifiers they would develop algorithms to make sense of the image and “learn” to determine whether it was a stop sign. Good, but not mind-bendingly great, especially on a foggy day when the sign is not perfectly visible, or when a tree obscures part of it. There is a reason computer vision and image detection did not come close to rivaling humans until very recently: the approach was too brittle and too prone to error. Time, and the right learning algorithms, made all the difference.
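
As an illustration of the hand-coded classifiers mentioned above, here is a minimal sketch of an edge-detection filter; the 6x6 image is an invented toy example, and the Sobel kernel shown is one standard choice among many.

    # Sobel kernel for the horizontal intensity gradient: it responds strongly
    # where brightness changes from left to right, i.e. at vertical edges.
    SOBEL_X = [[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]]

    def horizontal_gradient(image):
        """Convolve the image with SOBEL_X; large values mark vertical edges."""
        h, w = len(image), len(image[0])
        out = [[0] * w for _ in range(h)]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                out[y][x] = sum(SOBEL_X[dy][dx] * image[y + dy - 1][x + dx - 1]
                                for dy in range(3) for dx in range(3))
        return out

    # A toy 6x6 grayscale image: dark on the left half, bright on the right half.
    image = [[0, 0, 0, 255, 255, 255] for _ in range(6)]
    for row in horizontal_gradient(image):
        print(row)   # large values appear in the middle columns, where the edge is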

CONCLUSION

AI is an extremely powerful and exciting field. It is only going to become more important and ubiquitous moving forward, and it will certainly continue to have very significant impacts on modern society. Artificial neural networks (ANNs) and the more complex deep learning techniques are some of the most capable AI tools for solving very complex problems, and they will continue to be developed and leveraged in the future. While a Terminator-like scenario is unlikely any time soon, the progression of artificial intelligence techniques and applications will certainly be very exciting to watch! AI is an exciting and rewarding discipline: a branch of computer science concerned with the automation of intelligent behavior. A revised definition of AI is the study of the mechanisms underlying intelligent behavior through the construction and evaluation of artifacts that attempt to enact those mechanisms. It can therefore be concluded that AI works as a kind of artificial human brain with remarkable artificial thinking power.

