OpenAI & Neuralink: Shaping Our AI Future

Machine generated transcript…

Today we ask whether comparably simple rules and multi-agent competition can also lead to intelligent behavior in a new virtual world. These agents are playing hide-and-seek. They have just begun learning, but they've already learned to chase and run away. You just saw a demonstration from the company OpenAI. Although the little orange and blue agents are playing a simple game of hide-and-seek, the video showcases how artificial intelligence and simulation training can be applied to problems with some variability.

This is a hard world for a hider who has only learned to flee. However, after training over millions of rounds of hide-and-seek, the hiders find a solution: they learn to use rudimentary tools to their advantage. By grabbing and locking blocks, they can create their own shelter. Throughout the video, the agents learn to play freely using the tools around them.

We've also put these agents into a more open-ended environment, randomizing the objects, team sizes, and walls in this world. Here they learn to construct their own shelter from scratch, which requires arranging multiple objects into precise structures. As you can probably tell, the environment is becoming more complex, but with lots of reinforcement learning and training the agents have learned to build a shelter that protects them from threats. If you thought this was fascinating, or scary, stay tuned to learn more about what OpenAI is working on and how it compares to Neuralink.
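
The agents above are trained with reinforcement learning at enormous scale. As a toy illustration of the same underlying idea (act, observe a reward, update a value estimate), here is a minimal tabular Q-learning sketch on a hypothetical one-dimensional world where a hider learns to walk toward a shelter cell. None of the names or numbers come from OpenAI's actual system; this is purely illustrative:

```python
import random

random.seed(0)  # reproducible toy run

# A hypothetical 1-D world: the hider starts at cell 0 and a shelter
# sits at the far end; reaching it ends the episode with reward 1.
N_STATES = 5
ACTIONS = [-1, +1]               # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3  # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit current value estimates,
        # sometimes explore a random move.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Standard Q-learning update toward reward plus discounted value.
        best_next = max(q[(s_next, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next

# The learned greedy policy should walk toward the shelter from every cell.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)])
          for s in range(N_STATES - 1)}
```

Even this tiny loop shows the pattern from the hide-and-seek video: early behavior is essentially random, and useful strategies emerge only from reward feedback accumulated over many episodes.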

Hey everyone, welcome to Neura Pod. In this episode we'll discuss OpenAI, the risks and developments of artificial intelligence, Elon Musk's Neuralink, and where this could all be headed. Artificial intelligence is a sharp, double-edged sword. If things play out well, humanity could be positively changed forever. However, if things unfold the wrong way, humans could get left behind or, I suppose, even obliterated entirely. OpenAI's mission is to ensure that artificial general intelligence benefits all of humanity.

In the last couple of months, OpenAI has been in the news because of its progress in code generation with a model called Codex: they've worked with GitHub to launch a new AI tool that generates its own code. OpenAI is working to create tools for developers using its language-generating model GPT-3. The practical application of this technology is pretty incredible. In this picture, two of OpenAI's founders, Ilya Sutskever and Greg Brockman, describe how the Codex model is used to generate useful code from some fairly ambiguous directions. First they start with a pretty simple command, "say Hello World", and the model outputs a result deemed appropriate. Next they follow it up with two more ambiguous commands. Already it's pretty cool that they've been able to make requests and have the model output something useful. They follow up with a command to say it five times, and then request: now, instead, do it with a for loop.
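
The transcript doesn't reproduce the code Codex actually generated, so the following is only a plausible Python sketch of where those stacked instructions end up:

```python
# Instruction: say "Hello World"
print("Hello World")

# Instruction: say it five times; now, instead, do it with a for loop
greetings = ["Hello World" for _ in range(5)]
for greeting in greetings:
    print(greeting)
```

The point of the demo is not the code itself, which is trivial, but that each new instruction builds on and modifies the result of the previous ones.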

As you can tell from this picture, they're providing instructions for the model that reference former instructions, and as more instructions are provided, the variability of the model increases. This is very impressive. I was planning on saying how amazing it is, but Ilya articulated it much better than I could have in this clip: It is fundamentally impossible to build such a system except by training a large neural network to do really good code autocomplete. That's all we did. It is really simple conceptually, though perhaps not in practice: you just set up a large neural network (a large digital brain) which has a mathematically sound learning procedure, and that part can be understood, and it is relatively simple. And then you make it work: you make the neural network big, you train it on code autocomplete, and by being good enough at code autocomplete we get the capabilities that you see here. It actually reads all the letters, all the words that we are giving it; it chews and digests them inside of its neural activations, inside of its neurons; and then it emits the code that we see. And because the autocomplete is so accurate, the code actually runs, and it runs correctly.

They also showcase more functionality of the Codex model by creating an actual web page: yes, so let's actually take a look. We have a web server running on port 8000, so we'll take a look, and there we go: Hello World, with empathy. The two instructions they provide are, one, make a webpage that says our message and save it to a file, and two, start a Python web server to serve that page. Greg then goes on to state that this is where he believes Codex really shines. It's fantastic that this tool can be used in conjunction with humans, I might add, to quickly develop programs. The model enables humans to focus their energy on the difficult cognitive tasks rather than the tedious ones. First of all, he says, I do want to point out that this particular example of writing a Python web server is something I've done a dozen, two dozen times, and I still never remember how to do it, because between Python 2 and Python 3 the exact structure of the modules changed.
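
For reference, the standard-library boilerplate being described here (a handler object handed to a TCP server, an address that can be an empty string, and a serve-forever call) looks roughly like this in Python 3. This is a hedged sketch of both demo instructions, not the code Codex generated; serve_forever() is moved to a background thread and the port is chosen by the OS so the snippet terminates cleanly:

```python
import http.server
import socketserver
import threading
import urllib.request

# Instruction one: make a webpage that says our message, saved to a file.
with open("index.html", "w") as f:
    f.write("Hello World, with empathy")

# Instruction two: start a Python web server to serve that page.
# The handler object is passed to a TCP server; the empty-string address
# means "bind to every interface", and port 0 asks the OS for a free port.
handler = http.server.SimpleHTTPRequestHandler
httpd = socketserver.TCPServer(("", 0), handler)
port = httpd.server_address[1]

# serve_forever() blocks, so this sketch runs it on a daemon thread.
threading.Thread(target=httpd.serve_forever, daemon=True).start()

# Fetch the page back, then shut the server down.
page = urllib.request.urlopen(f"http://127.0.0.1:{port}/index.html").read()
httpd.shutdown()
httpd.server_close()
```

In the demo the server simply runs in the foreground on port 8000; the threading here is only so the example is self-contained and exits on its own.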

He continues: you have to create this handler object, you pass it to a TCP server, you pass the address here and a port (and, oh yeah, your address could be an empty string if you want), and then you do serve-forever. It's complicated, and this kind of stuff is not the fun part of programming. The fun part of programming, I'd say, is really two things. One is understanding the problem, and that includes talking to your users; it includes thinking super hard about it and decomposing it into smaller pieces. These are the really cognitive aspects of building something. And then there's a second piece, which is mapping a small piece of functionality to code, whether it's an existing library or an existing function, whether it's in your own codebase or out there in the world. And that second part is where this model really shines. I think it's better than I am at it, because it has really seen the whole universe of how people use code. You should think of it this way: GPT was trained on all the text out there; this model has been trained on all the text and all the public code. So it really, I think, accelerates me as a programmer and takes away the boring stuff, so I can focus on the fun parts.

In addition to Codex, OpenAI just released more progress on a different model, one that quickly summarizes books. The model works by dividing the original text into sections and summarizing each of those sections; the process then continues on the summaries themselves until a complete summary is achieved. In alignment with their core mission, OpenAI states: as we train our models to do increasingly complex tasks, making informed evaluations of the models'

outputs will become increasingly difficult for humans. This makes it harder to detect subtle problems in model outputs that could lead to negative consequences when these models are deployed. Therefore, we want our ability to evaluate our models to increase as their capabilities increase. They continue by saying: we are researching better ways to assist humans in evaluating model behavior, with the goal of finding techniques that scale to aligning artificial general intelligence. Hopefully other companies and organizations will also be thinking about how artificial general intelligence can be used for good, because if it gets into the hands of the wrong group, things could turn out pretty poorly.

Around 2015 we heard claims from high-profile scientists and engineers, like Stephen Hawking, Elon Musk, and others, discussing the dangers of AI. They stated that AI could present an existential risk to humans in the near future. Awareness of this threat grew during the same year that Elon was asked to speak at MIT's AeroAstro Centennial Symposium about entering the field of AI. In view of its potential to be possibly the biggest game changer ever, do you have any plans to enter the field of artificial intelligence, and, in general, what are your thoughts on it? Do you think it's even close to being ready for prime time? I think we should be very careful about artificial intelligence. If I were to guess at what our biggest existential threat is, it's probably that. So we need to be very careful with artificial intelligence. I'm increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish. With artificial intelligence we are summoning the demon.

Elon Musk was also an early investor in DeepMind, a company focused on building safe artificial intelligence systems, which was eventually acquired by Google in 2014 for 600 million dollars. Elon specifically said that he did not invest in the company as an investment but as a way to oversee its development. He has been forthright about the existential risks related to AI in countless interviews and has pointed out how bad we humans are at predicting things, especially long-term danger. AI, in his own words, seems to be accelerating in scope and scale. He

believes that AI will be incredibly sophisticated in 20 years, but that in our current situation it's impossible to grasp the full picture, because we're at the beginning of an exponential curve of improvement. The test he uses to see whether AI is accelerating is whether things arrive sooner than expected; if they do, things are actually accelerating, which seems to be the case. When asked about some things that he would really get excited about within the coming years, he said we'll see some cyborg activity manifesting itself in the form of brain-computer interfaces. He added that he thinks the development of AI will occur alongside the development of BCIs. This clip is from an interview in 2015: I think we'll probably start seeing more truly cyborg activity, like brain-computer interfaces, alongside the AIs that are purely artificial. Yeah, I think so.

We can clearly see a growing interest in AI. For example, if we look at the number of times AI has been mentioned in all the books published from 1900 until today, we see a semi-exponentially growing interest in the subject, especially when you zoom into the past 10 years. Earlier, in 2015, there was a conference called The Future of AI: Opportunities and Challenges, arranged by the Future of Life Institute, where more than 200 AI experts, including Elon, gathered in January in Puerto Rico to discuss the

subject. They conducted a think tank on potentially dangerous outcomes and corresponding ways to mitigate the risks surrounding AI. After that conference, Musk donated 10 million dollars to the Future of Life Institute as a way to ensure that people consider the threat of AI. He also believed it might be good for government to follow developments related to the technology and regulate it for the safety of the public. Just a couple of months after that conference, Musk, Hawking, Apple co-founder Steve Wozniak, the co-authors of the standard textbook on AI, and roughly a thousand other prominent figures signed a letter calling for a ban on offensive autonomous weapons.

Industry leaders also signed on to the Partnership on AI in late 2016, led by a group of AI researchers representing the world's largest technology companies: Amazon, Google and DeepMind, Facebook, IBM, and Microsoft, with Apple joining soon after. The main purpose of the partnership was to create a coalition committed to the responsible use of AI, to develop and share best practices, and, mainly, to raise awareness about the technology itself.

Elon has also shared some of the possible benefits of developing AI; he thought of creating OpenAI as a non-profit to ensure the benefits of AI outweigh the downsides. Here we want to highlight the potential beneficial scenarios originally envisioned by Elon before OpenAI. So when you call artificial intelligence a double-edged sword, can you talk a bit about the positive edge first? What do you see as the greatest benefits we can get from AI? Well, the greatest benefit from AI would probably be in eliminating drudgery, so, like, in terms of

tasks that are mentally boring, not interesting. There are also, arguably, breakthroughs in areas that are currently beyond human intelligence, or at least beyond it for now; I think we could probably solve them in the long term, such as the classic examples of curing cancer and addressing diseases of aging, Alzheimer's, and all these things. So there are various problems that currently seem intractable to human intelligence, and if you had something that was way smarter, it could solve those problems.

These are some of the hints Elon dropped about why OpenAI should exist in the first place. Then fast forward to September 2016, months after the founding of OpenAI, to this interview between Sam Altman and Elon Musk, both of whom are founders of OpenAI. So I think we must have democratization of AI technology and make it widely available, and that's, you know, the reason that the rest of the team created OpenAI: to help spread out AI technology so it doesn't get concentrated in the hands of a few. But then, of course, that needs to be combined with

solving the high-bandwidth interface to the cortex. OpenAI was founded in December 2015 by Elon Musk, Sam Altman, Ilya Sutskever, Greg Brockman, and others. There were also many high-profile investors who collectively pledged more than one billion dollars to pursue the democratization and safe development of general AI. OpenAI was established as a 501(c)(3) nonprofit and started its activities primarily as an artificial intelligence lab, immediately producing impressive results such as Gym, a platform for reinforcement learning built to compete with DeepMind. The research has been wide-ranging, including teaching computers to control robots from few instructions (known as one-shot learning) and the creation of AI agents that play popular video games such as Dota. But how do you think OpenAI is going as a six-month-old company? I think it's going pretty well. I think we've got a really talented group with OpenAI, a really, really talented team, and they're working hard. OpenAI is structured as a 501(c)(3) non-profit, but, you know, many non-profits do not have a sense of urgency. It's fine, they don't have to have a sense of urgency, but OpenAI does, because I think people really believe in the mission. I think it's important; it's about minimizing the risk of existential harm in the future. And so I think it's going well. I'm pretty impressed with what people are

doing, and with the talent level, and obviously we're always looking for great people to join. In a change of events around two years after that, Elon resigned from the board in February 2018 but remained a donor. Elon Musk is one of the co-founders but left the board, that's right, last year. So how involved is Elon? So, Elon's no longer involved; he had to leave due to a conflict of interest with Tesla. He had also recruited Andrej Karpathy, one of the leading scientists at OpenAI, to become director of AI at Tesla in June 2017. Additionally, Shivon Zilis is a current board member of OpenAI; she's director of operations and special projects at Neuralink. Elon committed to starting Neuralink in July 2016, given its importance, despite the fact that his plate was already pretty full with Tesla and SpaceX. As we heard in some of the earlier clips, the development of a brain-computer interface, or an additional tertiary

layer, could help solve the bandwidth issue for communicating between humans and computers. In the next clip, Elon elaborates on why our human brains are limited and why it's important to upgrade ourselves to keep up with the development of AI. The limitation is one of bandwidth: we're bandwidth-constrained, particularly on output. Our input is much better, but our output is extremely slow. If you want to be generous, you could say maybe it's a few hundred bits per second, or a kilobit, or something like that. The way we output is, we have little meat sticks that we move very slowly and push buttons, or tap a little screen, and that's just extremely slow. Compare that to a computer, which can communicate at the terabit level; there are very big orders-of-magnitude differences. Our input is much better because of vision, but even that could be enhanced significantly. I think the two things that are needed for a future we would look at and conclude is good, most likely, are that we have to solve that bandwidth constraint, with a direct neural interface, a high-bandwidth interface to the cortex. Or, more eloquently put: we have to either merge with AI or be left behind. I think it's incredibly important that

AI not be "other"; it must be us. And I could be wrong about what I'm saying, I'm certainly open to ideas if anybody can suggest a path that's better, but I think we're really going to have to either merge with AI or be left behind. OpenAI and Neuralink differ a lot in their approach but are pursuing similar goals: either we merge with machines, or we run the risk of some powerful group developing artificial general intelligence to a level that can't be controlled. One way of framing the problem is: how do you contain a technology that may outsmart you? In this clip, Joe Rogan asks Elon: do you think that we'll merge somehow with this technology, or do you think it'll replace us? Here's Elon's response: the merge scenario with AI is the one that seems like probably the best for us. If you can't beat it, join it.

Artificial intelligence is still in the early innings, and whether it's Stephen Hawking, Mark Cuban, or Elon Musk stating the impact that AI can and will have, most of society doesn't seem too aware of what's coming next. Stephen Hawking said the development of full artificial intelligence could spell

the end of the human race. Mark Cuban continues to believe that the companies that have harnessed AI best are the companies dominating. And remember that clip earlier in the episode, when Greg of OpenAI said that Elon had to step away because of a conflict of interest with Tesla? Tesla recently showcased some of the things they're working on, including a humanoid robot. So between Neuralink and Tesla, it's pretty clear that he's committed to creating a useful, safe AI future.

One of the moves that signified the potential of AI is an article released in March 2019 by TechCrunch, titled "OpenAI shifts from nonprofit to 'capped-profit' to attract capital." The move capped investor upside at a hundred times their initial investment. Although it may seem like that's hardly a ceiling, well-developed general artificial intelligence could easily have near-binary financial outcomes: in other words, if a company like OpenAI were to develop AGI, the upside could be much larger than 100 times. From now until that pending future outcome, however, there's still uncertainty. Here is a closing thought from OpenAI co-founder and CTO Greg Brockman, where he talks about the company's bright future ahead. And so I think that we're in a similar sort of place here, where it's hard to predict what the future will be like, because we're in this exponential right now, where the computational power that we're using

is growing five times faster than Moore's law. And so what we do know is that every year we're going to have unprecedented AI technologies. We've been doing that for seven years; OpenAI has been doing it for three. And so I think that this year we have systems that can understand and generate text; five years from now we should expect systems that you can really have meaningful conversations with. And I think that we should see, within a bunch of different domains, a lot of systems that can work with humans to augment what they can do, much further than anything we can imagine today.

OpenAI has made huge progress in the field of AI with Codex, GPT-3, and other projects. We're very excited to see more developments in the near future, and we'll cover many of them as they unfold on our channel. Hope you enjoyed this episode.

Brief summary of this episode:

– (1:35) OpenAI: AI Beneficial for All
– (7:15) AI as an existential risk
– (11:47) Safe AI development
– (16:16) Neuralink – “If you can’t beat ’em, join ’em.”
– (19:04) The Future of AI

Join this channel to get access to perks:
https://www.youtube.com/channel/UCDukC60SYLlPwdU9CWPGx9Q/join

Neura Pod is a series covering topics related to Neuralink, Inc., such as brain-machine interfaces, brain injuries, and artificial intelligence. Host Ryan Tanaka synthesizes information, shares opinions, and conducts interviews to make it easy to learn about Neuralink and its future.

Most people aren’t aware of what the company does, or how it does it. If you know other people who are curious about what Neuralink is doing, this is a nice summary episode to share. Tesla, SpaceX, and the Boring Company are going to have to get used to their newest sibling. Neuralink is going to change how humans think, act, learn, and share information.

Neura Pod:
– Twitter: https://twitter.com/NeuraPod
– Patreon: https://www.patreon.com/neurapod
– Medium: https://neurapod.medium.com/
– Spotify: https://open.spotify.com/show/2hqdVrReOGD6SZQ4uKuz7c
– Instagram: https://www.instagram.com/NeuraPodcast
– Facebook: https://www.facebook.com/NeuraPod
– Tiktok: https://www.tiktok.com/@neurapod

Opinions are my own. Neura Pod receives no compensation from Neuralink and has no formal affiliations with the company. I own Tesla stock and/or derivatives.

Edited by: Omar Olivares
#Neuralink #NeuraPod

