Computer science
In computer science and in the part of artificial intelligence that deals with algorithms, problem solving encompasses techniques such as algorithms, heuristics and root cause analysis. The amount of resources (e.g. time, memory, energy) required to solve problems is described by computational complexity theory. In more general terms, problem solving is part of a larger process that encompasses problem determination, de-duplication, analysis, diagnosis, repair, and other steps.
Other problem-solving tools include linear and nonlinear programming, queuing systems, and simulation.[20]
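As a rough illustration (the task, names, and data below are illustrative assumptions, not drawn from a cited source), the following Python sketch contrasts an exact, exhaustive algorithm with a fast greedy heuristic on the same small problem, choosing numbers that sum as close as possible to a target, with the time complexity of each noted in comments.

    from itertools import combinations

    def best_subset_exact(values, target):
        """Exhaustive search: tries every subset, so it runs in O(2^n) time."""
        best = ()
        for r in range(len(values) + 1):
            for subset in combinations(values, r):
                if abs(sum(subset) - target) < abs(sum(best) - target):
                    best = subset
        return best

    def best_subset_greedy(values, target):
        """Greedy heuristic: O(n log n) time, fast but not guaranteed optimal."""
        chosen, total = [], 0
        for v in sorted(values, reverse=True):
            if total + v <= target:
                chosen.append(v)
                total += v
        return tuple(chosen)

    values, target = [8, 6, 5, 3, 1], 11
    print(best_subset_exact(values, target))   # optimal choice, e.g. (8, 3)
    print(best_subset_greedy(values, target))  # fast approximation, e.g. (8, 3)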
Much of computer science involves designing completely automatic systems that will later solve some specific problem: systems that accept input data and, in a reasonable amount of time, calculate the correct response or a correct-enough approximation.
In addition, computer scientists spend a surprisingly large amount of time finding and fixing problems in their programs, an activity known as debugging.
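A minimal sketch of a "correct-enough" answer computed in bounded time, assuming a numerical task for illustration, is an approximation routine that stops as soon as its result is within a chosen tolerance, such as Newton's method for square roots.

    def approx_sqrt(x, tolerance=1e-9, max_iterations=100):
        """Refine a guess for sqrt(x); stop when it is close enough or time runs out."""
        if x < 0:
            raise ValueError("x must be non-negative")
        if x == 0:
            return 0.0
        guess = x
        for _ in range(max_iterations):
            if abs(guess * guess - x) <= tolerance:
                break                          # approximation is good enough
            guess = (guess + x / guess) / 2.0  # Newton update step
        return guess

    print(approx_sqrt(2.0))  # about 1.41421356, accurate enough for most uses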
Logic
Formal logic is concerned with such issues as validity, truth, inference, argumentation and proof. In a problem-solving context, it can be used to formally represent a problem as a theorem to be proved, and to represent the knowledge needed to solve the problem as the premises to be used in a proof that the problem has a solution. The use of computers to prove mathematical theorems using formal logic emerged as the field of automated theorem proving in the 1950s. It included the use of heuristic methods designed to simulate human problem solving, as in the Logic Theory Machine, developed by Allen Newell, Herbert A. Simon and J. C. Shaw, as well as algorithmic methods, such as the resolution principle developed by John Alan Robinson.
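To make the resolution principle concrete, the sketch below implements propositional resolution refutation; the clause encoding is an illustrative assumption, and full first-order resolution as developed by Robinson additionally requires unification, which is omitted here. A conclusion is proved by adding its negation to the premises and deriving the empty clause.

    def negate(literal):
        return literal[1:] if literal.startswith("-") else "-" + literal

    def resolve(c1, c2):
        """Yield every resolvent of two clauses (cut one complementary pair)."""
        for lit in c1:
            if negate(lit) in c2:
                yield frozenset((c1 - {lit}) | (c2 - {negate(lit)}))

    def refutes(clauses):
        """Return True if the empty clause is derivable, i.e. the set is unsatisfiable."""
        clauses = set(clauses)
        while True:
            new = set()
            for c1 in clauses:
                for c2 in clauses:
                    for resolvent in resolve(c1, c2):
                        if not resolvent:      # empty clause: contradiction found
                            return True
                        new.add(resolvent)
            if new <= clauses:                 # no new resolvents: set is satisfiable
                return False
            clauses |= new

    # Premises: p, and p implies q (written as the clause {-p, q}); negated goal: -q.
    kb = [frozenset({"p"}), frozenset({"-p", "q"}), frozenset({"-q"})]
    print(refutes(kb))  # True: q follows from the premises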
In addition to its use for finding proofs of mathematical theorems, automated theorem-proving has also been used for program verification in computer science. As early as 1958, John McCarthy proposed the advice taker, which would represent information in formal logic and derive answers to questions using automated theorem-proving. An important step in this direction was made by Cordell Green in 1969, who used a resolution theorem prover for question-answering and for other applications in artificial intelligence, such as robot planning.
The resolution theorem-prover used by Cordell Green bore little resemblance to human problem-solving methods. In response to criticism of that approach from researchers at MIT, Robert Kowalski developed logic programming and SLD resolution,[21] which solve problems by decomposing them into subproblems. He has advocated logic for both computer and human problem solving[22] and computational logic to improve human thinking.[23]
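The goal-decomposition style of SLD resolution can be sketched, for the propositional case only, as backward chaining over Horn clauses. The rules below are illustrative assumptions, and real logic programming languages such as Prolog also handle variables via unification.

    # Each head maps to a list of rule bodies; an empty body means the head is a fact.
    rules = {
        "ancestor": [["parent"]],             # ancestor if parent
        "parent": [["father"], ["mother"]],   # parent if father, or if mother
        "father": [[]],                       # father holds as a fact
    }

    def solve(goals):
        """Prove a list of goals by decomposing each into the subgoals of some rule."""
        if not goals:
            return True                       # nothing left to prove: success
        head, rest = goals[0], goals[1:]
        for body in rules.get(head, []):      # try each rule whose head matches
            if solve(body + rest):            # replace the goal with its subgoals
                return True
        return False                          # no rule or fact proves this goal

    print(solve(["ancestor"]))  # True: ancestor <- parent <- father <- fact
    print(solve(["mother"]))    # False: nothing establishes "mother"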