International scientific conference "INFORMATION TECHNOLOGIES, NETWORKS AND TELECOMMUNICATIONS" ITN&T-2022, Urgench, April 29-30, 2022
THE BENEFITS OF USING COMPUTER VISION IN MEDICINE
Khujaev Otabek, Rustamov Oybek, Allambayeva Yorqinoy, Sapoyev Sanjar
Urgench branch of the Tashkent University of Information Technologies named
after Muhammad Al-Khwarizmi.
h_q_otash@mail.ru, oybekrustamov@icloud.com,
sapoyev8386@gmail.com
Abstract: Artificial intelligence (AI) in healthcare is going through a hype period. Developers of every scale have ambitious plans, from small startups to large multinational vendors and manufacturers of medical equipment, all offering a variety of algorithms, services, and so on. However, building AI-based solutions for medicine and healthcare brings not only new opportunities but also numerous problems. In this article we examine the barriers that must be anticipated and overcome on this path in order to avoid typical mistakes, and we also address the main question that concerns absolutely everyone: will AI replace the doctor in the future?
Keywords: artificial intelligence, medicine, healthcare, development, problems, certification
A large number of development teams that have long worked professionally on computer vision systems, artificial intelligence, and neural networks are now actively moving into the healthcare sector. Often these are specialists with impressive prospects, extensive experience, and ready-made products already deployed successfully in industry, transport, and other fields. But they all arrive in an area that is completely new to them, one that lives by its own laws and rules. And the main thing about this area is that the price of even the slightest mistake can be a person's life.
Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed to the natural intelligence displayed by animals, including humans. Leading AI textbooks define the field as the study of "intelligent agents": any system that perceives its environment and takes actions that maximize its chance of achieving its goals [1].
The main problem faced by developers of AI for medicine is the quality and standardization of the source data. In most cases, developers take a single dataset, most often belonging to one hospital, train an algorithm on it, and release it. If the goal is to attract investment, this approach may be enough. But if the goal is a product that will work at scale in the healthcare system at home and abroad, this approach is unacceptable, because the results of the AI usually turn out not to be reproducible on other datasets, other diagnostic devices, a different patient population, and so on.
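As a minimal sketch of what checking such reproducibility can look like, the Python fragment below trains a classifier on one hospital's data and then scores it on a second hospital's data (external validation). The file names, loader, and model choice are illustrative assumptions, not details from the article.

```python
# Hedged sketch: internal vs. external validation of a diagnostic classifier.
# "hospital_A.npz" and "hospital_B.npz" are hypothetical files with features X
# and labels y; only the train-on-one-site / test-on-another-site pattern matters.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def load_site(path):
    """Placeholder loader for one hospital's dataset (features X, labels y)."""
    data = np.load(path)
    return data["X"], data["y"]

X_A, y_A = load_site("hospital_A.npz")   # the site the model is developed on
X_B, y_B = load_site("hospital_B.npz")   # a different site, scanner, population

# Hold out part of the internal data for an honest internal score.
X_tr, X_val, y_tr, y_val = train_test_split(X_A, y_A, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_tr, y_tr)

auc_internal = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
auc_external = roc_auc_score(y_B, model.predict_proba(X_B)[:, 1])
print(f"internal AUC: {auc_internal:.3f}, external AUC: {auc_external:.3f}")
```

A large gap between the internal and external scores is exactly the non-reproducibility described above.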
There are frequent arguments that artificial intelligence does not need to be certified as a medical device. Yet when a person plans to treat himself or a family member with pills, he wants those pills to have passed clinical trials, and he wants the instruments used during an operation to be sterile, proven, high-quality medical products. But as soon as the conversation turns to artificial intelligence,
it is for some reason not considered a medical device, and a diagnostic solution that determines a person's fate is treated as a kind of application that carries no responsibility. This is a thoroughly vicious practice.
For example, Figure 1 shows an image in which artificial intelligence detected the presence of cancer in a person's stomach with a confidence of 79%.
Figure 1. Detection of cancer in a person's stomach.
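As a hedged illustration only, the following Python fragment shows how a confidence value like the 79% in Figure 1 is typically obtained: a trained classifier scores the image and a softmax converts its outputs into class probabilities. The model file, image path, and class names are assumptions, not details from the article.

```python
# Hedged sketch of single-image inference producing a confidence score.
# Assumes a whole trained model object was saved earlier with torch.save(model, ...);
# "gastric_classifier.pt" and "endoscopy.jpg" are placeholder names.
import torch
from torchvision import transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = torch.load("gastric_classifier.pt", map_location="cpu")
model.eval()

image = preprocess(Image.open("endoscopy.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probabilities = torch.softmax(model(image), dim=1)[0]

classes = ["no cancer", "cancer"]  # assumed binary labels for the example
for name, p in zip(classes, probabilities):
    print(f"{name}: {p.item() * 100:.0f}%")
```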
To achieve better results in this area, for example in the detection of stomach cancer, more data is required for training the neural network. This is the main thing to pay attention to: the neural network will perform better if, at training time, it is given good-quality, high-resolution images as input. It is also worth noting that training requires a computer with a good graphics card and a large amount of RAM, since building the model consumes a great deal of computing power [2].
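As an illustration of such a training setup, the sketch below fine-tunes a standard convolutional network on high-resolution images using PyTorch and torchvision. The directory layout, image size, and hyperparameters are assumptions made for the example, not the authors' actual configuration.

```python
# Hedged sketch of training a binary gastric-image classifier on a GPU.
# Assumes images are organized as gastric_images/train/<class>/image.jpg.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Consistently preprocessed, high-resolution inputs are what the text stresses;
# quality of the training images matters as much as their quantity.
preprocess = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("gastric_images/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=16, shuffle=True, num_workers=4)

# Start from an ImageNet-pretrained backbone and replace the final layer.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)  # cancer / no cancer
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch + 1}: last batch loss {loss.item():.4f}")
```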
For example, for routine detection of stomach cancer with the help of artificial intelligence, the model needs to be fed on the order of 1,000,000 good-quality images. And for the model to be trained properly, a large computing system is needed, with a processor clocked at no less than 3 GHz and with at least four physical cores. NVIDIA CUDA can also be used, which during training makes it possible to harness the full computing power of high-performance graphics cards.
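The short check below, again only an assumed setup based on the rough guidelines above rather than any official requirement, reports the available CPU cores and whether a CUDA-capable GPU can be used for training.

```python
# Hedged hardware check before training; assumes PyTorch is installed.
# Note: os.cpu_count() reports logical cores, which may be twice the
# number of physical cores when hyper-threading is enabled.
import os
import torch

logical_cores = os.cpu_count() or 0
print(f"logical CPU cores: {logical_cores}")

if torch.cuda.is_available():
    gpu = torch.cuda.get_device_properties(0)
    print(f"CUDA GPU: {gpu.name}, {gpu.total_memory / 1024**3:.1f} GiB memory")
    device = torch.device("cuda")
else:
    print("no CUDA GPU detected; training will fall back to the CPU and be much slower")
    device = torch.device("cpu")

# Any model or tensors moved to this device will use the GPU when one is present.
x = torch.randn(8, 3, 512, 512, device=device)
print(f"test batch allocated on: {x.device}")
```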