SUPER INTELLIGENCE


Definition:-



A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds.

"superintelligence " may also refer to a property of problem-solving systems(eg. superintelligent language translators or engineering assistants)

whether or not these high-level intellectual competencies are embodied in agents that act in the world.
University of Oxford philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest".

In short, a superintelligence could do everything a human mind can do, only much faster.



ABOUT SUPERINTELLIGENCE:-

AI has been the catchphrase on every futurist's tongue. From chatbots to smart assistants, AI has begun to transform numerous industry verticals.

If you take a deeper look, you will notice that AI has started outperforming humans at several tasks, such as detecting some cancers better than oncologists, translating languages, and beating the world champion at Go (the ancient Chinese board game).


These achievements are setting the tone for a future in which intelligent cyborgs and superhumans are no longer confined to sci-fi movies.

This is what artificial superintelligence refers to: a machine whose cognition surpasses that of humans. Think of Jarvis or Ultron from the Marvel movies.


HURDLES IN THE WAY OF MAKING A.S.I.


AI is a relatively young field of technology, so it has its own roadblocks.


SOLVING COMPLEX PROBLEMS:- 

Chatbots have tremendously improved customer engagement, and we already have a film directed by AI and music composed by AI. But how do you solve bigger, real-life problems such as water pollution, air pollution and global warming?

HUGE AMOUNTS OF DATA:-


To train an AI accurately, we need large, clean data sets. It is not easy to collect large data sets while maintaining good quality, so the challenge is to train and deploy algorithms with less data.
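One common way to cope with limited data is transfer learning: reuse a network pre-trained on a large public data set and fine-tune only a small task-specific head. The sketch below is illustrative, not a prescribed method; it assumes TensorFlow/Keras with an ImageNet-pretrained MobileNetV2, and the class count and data set are placeholders.

```python
import tensorflow as tf

# Reuse a network pre-trained on a large public data set (ImageNet) so that
# only a small, task-specific data set is needed for the new problem.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pre-trained feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),  # 5 is a placeholder class count
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(small_labelled_dataset, epochs=5)  # a few hundred images can be enough
```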

DATA PROCESSING:-

Because AI requires large amounts of data, it also needs significant processing power. Building an ASI would require far more data than we can imagine, so how do we get that processing power without spending a fortune?
Google may have the answer. It has released TPUs (tensor processing units), which are built to speed up machine-learning workloads, but for now TPUs are produced only in limited quantities.
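As a rough sketch of how TPUs are used in practice (assuming a TPU runtime such as Google Colab; the model here is only a placeholder), TensorFlow exposes them through a distribution strategy:

```python
import tensorflow as tf

# Connect to a TPU if one is available (e.g. a Colab TPU runtime);
# otherwise fall back to the default CPU/GPU strategy.
try:
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)
except ValueError:  # raised when no TPU is found
    strategy = tf.distribute.get_strategy()

# Variables created inside the strategy scope are placed on the accelerator.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
```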

WHAT ABOUT ENTIRELY NEW DATA?

Suppose you have trained an algorithm to identify the shapes "square", "circle" and "line". If you then ask it to distinguish a circle from a pentagon, you have to train it again, which is a lengthy process. Such incremental changes in the data are quite common in real life, so adjusting to new data is a major concern.
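A minimal sketch of this limitation, using scikit-learn's SGDClassifier with synthetic "shape" feature vectors (all names and data below are made up for illustration): an incrementally trained model must know its classes up front, so a brand-new class such as "pentagon" forces a retrain on the combined data.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Synthetic feature vectors standing in for the three original shape classes.
X_old = np.random.rand(300, 16)
y_old = np.random.choice(["square", "circle", "line"], size=300)

clf = SGDClassifier()
# An incrementally trained model must be told every class it will ever see ...
clf.partial_fit(X_old, y_old, classes=["square", "circle", "line"])

# ... so when a brand-new class ("pentagon") appears, the simplest fix is to
# retrain from scratch on the old and new data combined.
X_new = np.random.rand(100, 16)
y_new = np.array(["pentagon"] * 100)

clf_retrained = SGDClassifier().fit(
    np.vstack([X_old, X_new]),
    np.concatenate([y_old, y_new]),
)
```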

FORECAST FOR CREATING THE FIRST A.S.I.:-


Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to hold an immediate and enormous advantage in at least some forms of mental capability, including perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible for biological entities. This may give them the opportunity, either as a single being or as a new species, to become much more powerful than humans and to displace them.

Most surveyed AI researchers expect machines to eventually be able to rival human intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able "to simulate learning and every other aspect of human intelligence" by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines never to reach that milestone.

In a survey of the 100 most-cited authors in AI, the median year by which respondents expected machines "that can carry out most human professions at least as well as a typical human" was around 2050.

So, for now, superintelligence is only a hypothetical concept, but with technology growing so rapidly, it may become reality before long.


DANGERS RELATED TO SUPER INTELLIGENCE:- 

                                     

Learning computers that rapidly become superintelligent may take unforeseen actions, or robots might out-compete humanity.
Researchers have argued that, by way of an "intelligence explosion" sometime over the next century, a self-improving AI could become so powerful as to be unstoppable by humans.
                                     

In one such scenario, when we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so.
                                 

    Eliezer Yudkowsky explains: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."


This presents the AI control problem: how to build a superintelligent agent that will aid its creators while avoiding inadvertently building one that will harm them.


We can say that in the future AI will become conscious, self-sustaining and self-learning; we will not need to babysit it at all. ASI (artificial superintelligence) could help us eradicate disease and poverty, control pollution, find extraterrestrial life, and much more. But what if the time comes when it fights wars, becomes a potentially catastrophic entity, takes control of the world, undermines those far less smart than itself, and bends the world to its will? Does this sound familiar? We humans did exactly this to the entire Earth. What are the chances that a much smarter ASI would do the same to us?


                                   "You want to know how super-intelligent cyborgs might treat ordinary flesh-and-blood humans? Better start by investigating how humans treat their less intelligent animals' cousins. It's not a perfect analogy, of course, but it is the best archetype we can observe rather than just imagine"
                                                             -------- YUVAL NOAH HARARI


THE ENDING:- 

         
We need to be more responsible while building ASI and should embed security protocols inside it. The dilemma is who will create those security protocols, us or the ASI itself, since the ASI will be smarter than we are.


We should set a limit for the ASI beyond which it will self-destruct. Call this a security or safety valve, one that will protect mankind from the risks of an over-empowered ASI.


Finally, we should retain supreme control over any technology or AI; we should not let ASI rule over us.


Nowadays we cannot live without technology, so technology is necessary, but with a safety valve.
