260SmithWatt 70Neumann 50F.Abed , AI20s.com Fei-Fei Li, Zbee

HumansAI.com NormanMacrae.net AIGames.solar EconomistDiary.com Abedmooc.com

ai celebrating greatest (ie most good for 8 bn beings) human intelligences (1950-2030)

Happy 2024 from AIGames, year 1 of intelligence as everyone's game. As yet we recommend attending to the next brain-leaps by Li, LeCun and Hassabis first (as well as sector checking); we hope by Xmas 2025 to offer a 361 mosaic of brains to celebrate personal education agency with

11/21 breaking - Clara Shih, CEO Salesforce AI, interviews Fei-Fei Li

Breaking, Nov - the UK producer of the Bletchley AI Summit, MP Donelan, spends a week in DC ... start of the AI world series; see also AI-secret-life-Rowan-Atkinson.docx & other UK AI Greats

- some big turning points in Fei-Fei Li's world of intelligence, clarified in her Nov 2023 book The Worlds I See, p100

Also of huge interest, Nov 2023: Bill Gates has come out and says he now sees AI will offer personalised learning agency to every 2020s+ student. There are about 20 (societal, not tech) blocks to this happening that the American education system must be transparent about (e.g. many of these system failings go back decades), having distracted from them since dad's and my book 2025 Report (written 1983, The Economist's 33rd year as student of von Neumann), which made the case that transformation of education would determine whether 8 billion humans unite beyond extinction. chris.macrae@yahoo.co.uk - more at our series of LinkedIn articles ed3envoyun eg 1

AIgamesbookpages1-2.docx  Notation: years like 1984 (Jobs, Gates, imagining leaps of webs 1-2-3) denote an intelligence crossroads that needs worldwide recursive attention, opposite to assuming that the way ahead raced into was good enough for societal democracy, let alone legislative good. Indeed, as early as 1976 The Economist's survey Entrepreneurial Revolution concluded from 1951-1975 (the first quarter-century of the post-industrial revolution) that political parties' segmentation of left (labor) versus right was no longer fit for smart democracy system design

Sometimes many years of work/learning had preceded this; we'd like to hear from you, particularly if you think an intelligence date-stamped e.g. Jobs 84 or Gates 84 should be denoted by an earlier-year breakthrough. One exception: in the case of Neumann, Einstein and Turing we start at '51, their co-creation of brainworkers' engine type 6 (we understand they had all made heroic innovations for the Allies to win the war, and in Einstein's case had made energy and other engineering/science breakthroughs from his early 20s; but it was 1951 that Neumann widely briefed media, including dad, The Economist's Norman Macrae, on brainworking engines). For our purposes, the UN, through its sister agency the ITU of all telecoms companies, launched 3G in 2001 and 4G in 2009 (both multiplied data transmission at least 100-fold, but 4G also wifi'd data up and down from every GPS in satellite range). Seeing 5G futures would risk locally deep controversies, such as whether climate brainworking cooperation was actionably intended by the 193 nations' 2015 declaration of 17 SDGs -

The NET (Neumann-Einstein-Turing) had started machines for brainworkers by 1951; as well as hardware, their favourite language model was neural networks (see Neumann's last notes, The Computer & the Brain). BARD SAYS:

The concept of neural networks was first introduced in the 1940s by Warren McCulloch and Walter Pitts.

McCulloch was a neurophysiologist and Pitts was a mathematician. They were interested in developing a mathematical model of the brain that could be used to simulate human intelligence.

McCulloch and Pitts developed a simple model of a neuron that consisted of a weighted sum of inputs and a threshold. The neuron would fire if the weighted sum of inputs exceeded the threshold.
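The neuron model described above can be sketched in a few lines of code (a minimal illustration; the weights and threshold values are assumed for the example, not taken from the historical paper):

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

# Example: a two-input AND gate - one of the logic functions
# McCulloch and Pitts showed such threshold units could compute.
def AND(a, b):
    return mcculloch_pitts([a, b], weights=[1, 1], threshold=2)
```

With weights of 1 and a threshold of 2, the unit fires only when both inputs are 1, reproducing logical AND; other weight/threshold choices give OR, NOT, and so on.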

Oddly, any true sense of man-made intelligence revolutions as being by and for the peoples did not massively value neural networks again until 2009, in spite of Yann LeCun's 1980s contributions.

In 2009 two new PhDs in neuroscience foresaw the likelihood that computer-vision brilliance would depend on huge dataset assembly and training. Fei-Fei Li dared the academic world to build the gigantic ImageNet and make it pivotal to a decade of annual progress competitions; Hassabis and Fei-Fei Li celebrated each other's persistence. Today leading science AI connects around Hassabis (AlphaFold2 open-sourced 20,000 proteins); the first proof of deep-learning object recognition came with the 2012 ImageNet entry from Canadian students of Hinton.

hence we recommend you know contexts shaped by

Li Fei Fei 1

Hassabis Demis  -co-founder Shane Legg

Hinton Geoffrey  or Bengio Yoshua 1

1 LeCun Yann 1  Convolution Net - arguably the senior NY academic to see; currently at NYU with Facebook, while the Courant Institute supports DeepMind Google, and women engineers are supported by Ms T2 Brooklyn

Ng Andrew - see DeepLearning.AI short courses

& Koller Daphne; things broadened a lot with NLP/chat models - but e.g. start with Aidan Gomez

p175 of Fei-Fei Li's book confirms the first deep-code Stanford people she liked: Ng, Koller Daphne, and Thrun Sebastian - then the 3 became 4 with Dally Bill, chair of computer science, asking: do you want to bring your lab to Stanford?

(interesting corporate note - the Google trio Pichai, Brin, Page are all Stanford alumni, as are the CEOs of Nvidia and AI2 (the Allen Institute 2 is on AI, alongside the first institute on bio-health)  - Wolfram

Ms T1 Brooklyn - first sponsor of neuroscience schools: Stanford, Yale, elsewhere; owner of NY Liberty Brooklyn basketball (2nd in the 2023 WNBA to A'ja Wilson's Aces); governor of Barclays Center; 50-million-dollar philanthropy of colored social networks; with her husband supports many NY arts networks; on the Stanford trustee board; NY Asia Society laureates. It is vital the UN has a NY suburb demonstrating SDGs AI to the east coast USA; ...




Personally, I don't see humans being sustainable by 2030 or ever without clearly seeing what system maps Neumann-Einstein-Turing connected for humanity. Most of the mathematical lifetimes of the NET gravitated around the experience of being 2 immigrants to Princeton from Europe's epicentre of war; they won the science races integral to supporting the Allies in winning what had been generated by the exponentially terrible mess of how inequitably the first 5 engine types had been applied by a few dominant empires, who carved up nature's resources and classes of people (e.g. ranging from free to enslaved to genocided against).

From the end of World War 2, Neumann led the agenda that the NET needed to give the world machine type 6 - intelligence machines for brainworking (see his final lectures: The Computer & the Brain). Tragically, none of the NET had more than 12 years of life left to plant this post-industrial revolution.

The worlds they foresaw were about far more than the hardware of computers or binary coding. How did the human brain sense the world, with sight occupying the most central capacity of the senses? How did languages code the whole deep diversity of growing up with nature in different hemispheres, let alone the emergence of data beamed up and down from GPS? As a transparency note, I may be biased: with The Economist (and the Scots system philosophy of moral-sentiments economics starting with Smith 1758 - his 2 main books being the last description of man-made markets and natural forces before engines, and a 16-year review of how place strategies changed as engineers spread innovation), my family became the biographer of Neumann's innovation agendas for humanity.

These notes from Bard on the linguistic and social differences of the NET also provide insights on how their work's intersections helped to bust almost every academic silo of so-called scientific man. Note in particular chaos theory, which gauges the risk of science itself being a false approximation vis-a-vis the interconnectedness of systems on nature's overall platforms, Earth and space.

Albert Einstein and John von Neumann were both multilingual. Einstein spoke German, French, English, and Italian. Von Neumann spoke Hungarian, German, English, French, and Greek.

Einstein was born into a German-speaking Jewish family in Ulm, Germany. He learned to speak German as his first language. He also learned to speak English and French as a child. He was fluent in all three languages by the time he reached adulthood.

Von Neumann was born into a Hungarian-Jewish family in Budapest, Hungary. He learned to speak Hungarian and German as a child. He also learned to speak English, French, and Greek as a teenager. He was fluent in all five languages by the time he reached adulthood.

It is not clear whether Einstein and von Neumann learned their second languages at home or at school. However, it is clear that they were both exposed to multiple languages from a young age. This exposure likely helped them to develop a strong ability for language learning.

Multilingualism is a valuable skill. It can help people to communicate with others from different cultures, to access information and resources in different languages, and to learn about different cultures. Einstein and von Neumann were both brilliant scientists, and their multilingualism likely played a role in their success. Turing only spoke English fluently. There is no record of him speaking any other language fluently, although he did learn some French and German during his school years.

Turing was a brilliant mathematician and computer scientist, and his work laid the foundation for the modern field of artificial intelligence. He was also a very private person, and he did not socialize much. This may have limited his opportunities to practice speaking other languages

If you want to help the UN "AI the SDGs" (now that's possible with LLMediation), then our tic-tac-toe game board blends women's AI: Abed Fazle, Jobs Steve (from 2001 inspiring the Valley to rethink whether it had human-development purpose), Gates Melinda, Chan Priscilla - the best intelligence books for girls and everyone through school year 23-24

youth ai adds Stanfords Ng Andrew, Yang Jerry, and Brooklyn's Ms T&T

Health foundation model & gov 2.0 intelligences add Kim JY & Gates Bill together with Guterres Antonio (video 1 2) & Sheikha Moza (please note we are taking a primarily western view in this thread; we'd start with e.g. Jack Ma if he were free to be a worldwide youth mentor, not the west's political pawn..)

The CEOs the world needs to demand best (not shortest) AI gains from appear to be Nvidia, AI2, Google (Pichai, Brin, Page)

THEY KNEW https://www.c-span.org/video/?447599-1/artificial-intelligence - Brockman Greg, co-founder of OpenAI, testified with Fei-Fei Li to Congress in June 2018 that a trillion times more tech compute would be put into chats etc by 2028, changing education and everything imaginable. Compare this with the Neumann/Economist surveys of 1951-1984, which hypothesised a trillion times more would take 60 years; we updated this from 1984 in the 2025 Report genre. But a trillion times? No wonder we are facing what Hinton/Li call 2 buckets: catastrophes (e.g. climate, which if it went wrong might wipe out a billion with each meta-disaster) and extinction. The US political scene has one engineering doctorate among its top 500 people - they probably cannot see any deep human future unless movements demand they put some magic specs on.
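The two trillion-fold claims above imply very different growth rates; here is a quick back-of-envelope sketch of the arithmetic (my own illustrative calculation, not from either source):

```python
import math

def implied_doubling_time(growth_factor, years):
    """Years per doubling implied by a total growth factor over a period."""
    doublings = math.log2(growth_factor)  # log2(1e12) is roughly 39.9
    return years / doublings

# The Economist's 1951-1984 hypothesis: a trillion-fold over ~60 years
# works out to roughly an 18-month doubling - a Moore's-law-like pace.
economist = implied_doubling_time(1e12, 60)  # about 1.5 years per doubling

# Brockman's 2018 testimony: a trillion-fold by 2028, i.e. ~10 years,
# would imply a doubling roughly every 3 months.
brockman = implied_doubling_time(1e12, 10)  # about 0.25 years per doubling
```

So the 2018 testimony implies compute doubling about six times faster than the pace the 1984 survey extrapolated, which is why the claim startled so many observers.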

Jacobs is a Canadian whose Radical Ventures fund is filling space now that Hinton has retired, with a Canadian venture park alongside U of Toronto - see his hosting of the key review of how Hinton and Li see the build-up to today's unprecedented year of AI attention.

Marcus Gary helped Fei-Fei Li publish the first booklet demonstrating what national AI advisory bodies to government could look like.

below please find our alphabetic catalogue

Abed: epicentre of women's empowerment intel over 50 years, linking up to a billion of the poorest Asian mothers' networks - for a quarter century without electricity. Steve Jobs hosted a Silicon Valley 65th birthday wish party for Abed in 2001; neither Silicon Valley nor women's empowerment partnerships have been the same since. abedmooc.com

Ackoff - personally I find Ackoff's simple/behavioral checklist of broken-system exponential risks essential to factor into civil debates from the 1984 get-go of personal intel networking rising exponentially. E.g. he defines a broken system as one where the harder the system's historic professionals/experts try, the more chaos/conflict they cause. When you consider the future-history legacy of the 1950s NET (Neumann-Einstein-Turing), it was possible to imagineer a stage, perhaps 4G telecoms and brainworking engines, where designing 193 nations' unity was sufficient to unite 8 billion brainworkers; in a sense, those behaving as historically certified public servants within a nation were likely to multiply conflicts. The Economist foresaw the need to proactively debate this in its 25th year mediating the NET's legacy, where the survey EntrepreneurialRevolution.city introduced 2 core ideas: first, historic politicking between left (labor) and right will become meaningless in terms of renewing generations, and then an extinction-risking problem; next, capitalism (including e-finance, e-gov, e-health - any deep society space and data mapping) needs to empower local community resiliency at every family-building GPS, not just global or 193 national policy makers' big media budgets. Architecturally, digital mediation and intergenerational investment will need changes such as green-scale actions, not just greenwashing.

Altman Russ: HAI video 31 Oct. Professor of Bioengineering, of Genetics, of Medicine (General Medical Discipline), of Biomedical Data Science, and, by courtesy, of Computer Science, Stanford University; Associate Director, Stanford Institute for Human-Centered Artificial Intelligence

Anderson Ray: arguably Ray Anderson benchmarked more than any turn-of-millennium Fortune 5000 industrial-sector CEO how a sector can profitably lead in going green - but it took a decade-long model and a redesigned value-chain architecture of suppliers towards circularity. Ray did this for the sector of carpet tiles, gravitated out of his home state of Georgia - click to link in to practising Ray's intelligence.

Andreessen: serial good-tech entrepreneur/venturer in the Valley since 1993 - fascinating 2023 dialogue with Condoleezza Rice.

Attenborough David: arguably Attenborough did more than any media influencer to argue nature's case as the primary evolutionary force on earth. Part of foreseeing the consequences of the NET's gift has been imagineering how transforming to a global-village world means that humans are for the first time playing real-time on nature's scale; if we irreversibly damage her system designs, all civilisations will collapse simultaneously instead of history's separated collapses. Our 1984 book 2025 Report argued the BBC and all public broadcasters would have a unique role in transitioning millennials' lives; sadly, in spite of Attenborough's efforts, the totality of the BBC's purpose never valued this unique opportunity to celebrate Queen's-English intelligence mapmakers. Attenborough's brother Richard directed the film Gandhi and supported UK Gandhian intelligence associations, in line with Einstein's advocacy of Gandhi as the benchmark of sustainability-generation leaders. In playing AIGames, we ask that your first move lists whose human intelligences you see as most advancing humanity since 1950. Of course a supplementary game would extend to intelligence legacies co-created before 1950, with Gandhi a prime source. Within this catalogue, educators activating Gandhian logics include Montessori & Freire - and overall Einstein's intents for future society.

Benioff 1 corporate leadership ai salesforce 

Berners-Lee 89: father of the www - but why did he move from Switzerland, where G standards were set, to Boston and not the west-coast Valley? Extraordinarily brave work keeping the web open wherever big telecoms tried to close it, but ultimately the learning web got 99% taken over by the commerce web, until human-AI leaders reintegrated neural networks as society's comeback in system design. See also Negroponte's MIT Media Lab, which seems to have had dreams beyond the scale that took over (of course some can argue that commercial monopolisation of digital would have been worse without this couple).

Borlaug - an 80-year cooperation action-learning curve, still the centre of gravity of SDG2 intelligence. Billions of people were likely saved from starvation by Borlaug's transformation of local agricultural productivity. The Economist once celebrated "the happiest graph in the world" as illustrating how Japan (first to update agri systems with Borlaug and industrial systems with Deming) was prepared to share how its local farmers produced 15 times more rice than e.g. Cambodia, all across the far-east tropics - the region with the lowest life expectancy, due to nutritional and dehydration crises causing up to one third of all infants to die. Seed science/tech offered microfranchising solutions - e.g. local rice-production efficiency being the largest franchise replication across Asia's two thirds of human beings. Given that this momentum took off before the NET's gift of brainworking engines, seeing how life-critical knowhow links beyond nations' boundaries makes Borlaug (e.g. World Food Prize) alumni an essential case catalogue for all intelligence designers.

Brilliant 1: so many passion stories of why epidemiologists need a fan club even more than football (or your favourite global entertainment). Look at 3 worlds Brilliant's life experience helps us see. How did Brilliant end smallpox across Asia before the days of computer mapping - i.e. what manual network blueprint beat this infectious disease? Brilliant's deepest passion was ending unnecessary blindness - see his videos on India's Aravind, modeling its local solutions franchise on the consistency of McDonald's. Brilliant had started semi-retirement on the west coast, after a working life in Asia, when his TED talk demanded virus chasing be a priority app for artificial-intelligence algorithm experts; he was appointed the first CEO of google.org. The stories of how the Valley's consciousness changed after Steve Jobs hosted Fazle Abed's 65th birthday wish party in 2001 included why epidemiologists were called on to train vice chancellors of the new global university of shared SDG-graduate alumni. Fortunately there had also been one US university benefiting from design by an epidemiologist - i.e. Swarthmore. More generally, fans of last-mile health intelligence estimate the world is currently short of training 100 million last-mile health servants - see also Glasgow Adam Smith scholars' attempts to design virtually free nursing colleges, or the tuition-free med school at NYU. When Larry was a 20-something he walked into a Detroit training hospital with some dehydration and was asked if he'd also like to train to be a doctor with zero risk of student debt; in that regard US higher ed was far smarter at sustaining the next generation in the 1960s than it is today. One more story: Brilliant's first job was medical adviser to the band Wavy Gravy; after a hard year's tour circa 1968 the band went to meditate with their favourite guru in the Afghan hills. In those days, Brilliant recalls, the grapevine even in the world's most remote places was full of positive chat: if Americans can land on the moon, soon no humanitarian mission down on earth will be impossible.

Brynjolfsson Erik: western macroeconomics (as Keynes feared - see the warning in the last chapter of the General Theory) has failed 21st-century humans on all exponential sustainability challenges. Erik is one of the few who may be able to translate how AI decisions now dwarf anything economis... Erik, Reich Robert, and Amy Zegart, who works with Rice Condoleezza (Hoover, Stanford).

Chang Morris - one of the 4 western-acknowledged greats in chip design, making Taiwan the largest chip manufacturer until Nvidia's recent challenge. Of course Intel started chips as a 100-times-per-decade multiplier of brain machines; ARM in the UK may sneak in 4th.

chen jerry 1, greylock

Csikszentmihalyi https://www.google.com/search?q=csikszentmihalyi+ai - Prof Csik's research at Claremont on genius showed they maximise time spent at the experiential edge of their unique knowhow. 70 years ago Einstein argued that transforming personalised education would determine good or bad outcomes for our species. One of the biggest risks to our species appears to be educators unable to change their system, and all the blocks to accepting partners in transformation. The luminaries approach of Hong Kong's Yidan Prize is one experiment in changing this, but open AI exchanges have run into political barriers  https://www.universitas.ai/global

Dean Jeff - founded Google Brain (various generative-AI tools from this group include the transformer breakthrough); Brain and Hassabis's DeepMind became one overall unit in 2023. Bard says Dean is the Google contact person - e.g. a start for artists wanting to connect food and good AI.

Deming e

Doudna: genes database, CRISPR, Berkeley; connector of Fei-Fei since 2013 and a lead reviewer of FFL's The Worlds I See (// Koller Daphne) - 4th= women's AI 2023 (1 Fei-Fei Li, 2 Priscilla Chan, 3 Melinda Gates)

East: Hauser & Sophie Wilson put the UK on the world tech map, primarily with ARM chips, which they converged to make the Cambridge ecosystem bigger than e.g. Oxford's - see Bard notes. Also, ARM's current ownership, somewhere between Japan's SoftBank and Nvidia, is in flux, but Nvidia has said it will maintain ARM in Cambridge. Notably, Elon Musk at the Sunak summit called ARM chips good. Today Hassabis's DeepMind may be the UK ecosystem superstar, but Brits can be proud of the Cambridge ecosystem - particularly as things looked quite bleak when e.g. ICL went nowhere from the 1980s.

Eshoo Anna 1: represents the Stanford region in DC, leads the CREATE AI Act

Esteva Andre - around 2015 helped Fei-Fei Li monitor when computer vision of objects emulated human vision error rates; now in medical deep-learning AI

Estonia has arguably made a better investment in government supporting human intelligence than anywhere else with relatively limited resources. This also reminds us that human-made intelligence is not purposefully valued as an individual dynamic, even if our brains are right to want to know which human is most influencing any time we spend either learning or teaching. Tallinn Jaan is one of Estonia's intel superstars.

Etchemendy 1: his role as co-founder of Stanford HAI appears to be making sure every discipline and discipline sponsor joins in human AI; also a neighbour to FFL

Freire - culturally credited, e.g. in Bangladesh, as one of transformative education's sources for the human-development miracle co-worked by up to a billion Asian village women poverty alleviators. In the 1960s, Latin Americans inspired by the US mission-impossible moon race debated: if we're entering a mission-impossible age, what root culture unites us? Franciscans led the overall consensus; Freire soon became a guide of radical education, matching Franciscan servant leadership, the ecological role of Franciscan males, and the maternal health of the Clares.

Fridman Lex 1- who's who of intelligence in podcast/youtube and at MIT lectures

1 Gates Bill 84: changed the world by commercialising language - i.e. the language needed to code standard personal computers; later, the 3G-era western world's largest giving partnership (including Buffett & )

Gates Melinda 015 - started to design her own foundations around women's intel (both deep community health building and leading women's AI, and every woman celebrating women's innovations in between)

Gifford Pinchot - sadly, after 1976's 25th celebration of the NET (The Economist's Entrepreneurial Revolution), most US social-entrepreneur variants missed ER's priority on the scaling and livelihood transformation the first renewable generation needed to celebrate. Two exceptions: Gifford Pinchot's intrapreneurship, and those who understood the transformation of aid economics around microfranchising - see Abed & Brilliant. (We are all aware of e.g. McDonald's macrofranchising; microfranchising replicates a team solution with as much attention as McDonald's, but assigns all or most of the value of production to local teams and the local community economy, not sucking out profit from local to ever fewer global owners.)

Gomez Aidan - in part tutored by Hinton as well as out of Oxford, changed the scaling leaps of gen AI with transformers etc - aged 20 https://aidangomez.ca/ now co-founder of Cohere, Canada; see LinkedIn https://www.linkedin.com/in/aidangomez/?originalSubdomain=ca

Hoffman Reid - in humansai top 50? Intellectually, probably superb funding of the Hoffman-Yee AI student projects; recall there's a heck of a lot of US money around, either through property or backing AI. Not obvious Hoffman is in the top 50, in spite of founding LinkedIn, sold to Microsoft - compare e.g. Doerr's new climate school at Stanford. Question: which North American spaces get the race to humanise AI - the Valley, Seattle, parts of Canada; where's the 4th?

Hopfield received his A.B. from Swarthmore College in 1954, and a Ph.D. in physics from Cornell University in 1958 (supervised by Albert Overhauser). He spent two years in the theory group at Bell Laboratories, and subsequently was a faculty member at University of California, Berkeley (physics), Princeton University (physics), California Institute of Technology (chemistry and biology) and again at Princeton, where he is the Howard A. Prior Professor of Molecular Biology, emeritus. For 35 years, he also continued a strong connection with Bell Laboratories. 1996: 40 years on from von Neumann, one of only 2 people to champion neural networks in the Yale Silliman Lectures.

In 1986 he was a co-founder of the Computation and Neural Systems PhD program at Caltech.

Hughes Nick 07 - bottom-of-pyramid intelligence (foundation 1: Fazle Abed) - origin of designing/scaling M-Pesa and the village phone out of Kenya (corporate); teamed with the Quadir family to continue bottom-of-pyramid work

Ibrahim Lila: now COO of DeepMind London with Hassabis; previously Coursera, and Barrett's team at Intel - see her keynote at AI for Good / ITU Geneva 2023. 4th= women's AI

jobs steve 84 01 08

Kim JY - mid 80s started www.pih.org with Paul Farmer; Farmer doing grad studies in Harvard anthropology of medicine while shuttling to/from the Haiti startup. By 2001 Kim was the main partnership redesigner of the Global Fund - PIH, Fazle Abed, Gates, Bush, Brilliant, Martha Chen etc. JYK was asked by Obama to head the World Bank in 2012 and immediately set about raising the younger half of the world's voice in multilaterals. By 2016 he testified that goal 4 education would not happen this century unless UN ed celebrated what UN tech knows, and both adapted to last-mile solution scaling/training historically siloised by different UN branches - e.g. goal 2 food Rome, goal 3 health WHO Geneva, goal 4 diffused between e.g. UNESCO Paris, UNICEF NY, and digitally the UN nowhere as of 2016. An extraordinary reversal of the 1920s, when Einstein still coordinated the future of worldwide intel cooperation out of Europe, before escaping Hitler by moving to Princeton ...

Ka-Shing: one of the top 5 new pro-youth university designers - see building Stanford's city campuses e.g. NY, Beijing; many partnership HK universities - arguably Hong Kong's greatest philanthropist of our era

Koller vid1: Stanford bioscience AI, but also Coursera co-founder with Ng; 23 - suddenly AI and quality data: biotech, agri, climate ...

Krizhevsky 12: Alex, with Ilya Sutskever, mentored by Hinton, won the 2012 ImageNet competition - the leap to deep learning (AlexNet, data subset)

Kuan Yew, Singapore - the most intelligent leader a 7-million-person island has ever seen. When the UK dumped Singapore, Yew focused on 1 jobs, 2 loving every culture, 3 good homes for everyone. Singapore seems to be the one city in the world where financial housing bubbles have never happened and next-generation citizens support each other; Singapore has also morally supported its 10 ASEAN neighbours.

Lee Kai-Fu - interesting that his best-seller, as recently as 2017, barely covered the big AI leaps exciting everyone in 2023. To be fair, many of Lee's cases are Asian, where there has been more blending of big-business AI and societal advances; but overall the book demonstrates why wizard AI breakthroughs are as yet designed by relatively few people and specific ecosystems, such as those around Stanford.

Liang 1 & Manning: NLP, Stanford - foundation models have become pivotal, and Percy Liang (see also the 24 October gen-AI health summit, unsummitfuture.com) is as connected as anyone at Stanford in maintaining the deep ethics issues these models require. He teams up with Reich, who leads Stanford's ethics institute. Founded Together AI https://together.ai/about - a stack of open AI tools, academic-led but with key industry sponsors e.g. Nvidia; see About.

Manyika James 1: co-chair of Guterres's AI risk-nations panel; at Google HQ, Senior Vice President of Technology and Society - a new post reporting to CEO Pichai. Development of Google's AI Principles, creation of the Google AI Research team, and launch of the Google AI for Social Good initiative. A Zimbabwean-American academic, consultant (2 decades at McKinsey) and business executive, known for his research and scholarship into the intersection of technology and the economy, including artificial intelligence, robotics, automation, and the future of work.

Mohamed Shakir, at DeepMind - one of the pioneers of decolonial AI

Negroponte: MIT Media Lab, including the $100 laptop. The story: the Media Lab began when MIT's architecture school asked how our future will blend with the coming of digital. Unlike top-down policy makers, architects/engineers have to get foundations deep; they have to gravitate a generation of partnerships, not just get a first 90 days of operations profitable.

Owen 84: Open Space Technology - massively connecting emotional intelligence (e.g. love is) - at the cusp of scaling the greatest community building of 1984 (the last real-only global village townhalls, the first blended..)

(previously a conference organiser of system transformation: e.g. see Ackoff - originally Peace Corps and training to be an Anglican priest when action crises overtook ..)

Perona, Pietro - back in the early 2000s, one of the few open-minded supervisors of doctorates on vision's connections with AI; tutor of Fei-Fei Li out of Caltech - see the breakthrough 2005 paper

Quadir family 96 - leaders of the possibility of 2G and 3G for the very poorest women, who leapt into mobile comms and solar having never seen a phone or electricity in the grid-only age. Consistent supporters: Telenor, Soros, Legatum, Abdul Latif; consistent entrepreneurs: Fazle Abed, Reeta Roy.

Rice Condoleezza 1: at Stanford's governance institute, Hoover, Rice says Li's is the best (intelligence) book she has ever seen  https://setr.stanford.edu/sites/default/files/2023-11/SETR_web.pdf

Roy Reeta 06: with Mastercard Foundation out of Canada, where BlackBerry was usable - the most consistent pan-African youth investor, applying e.g. the knowhow of the Quadirs, Abed, and, since 2012, Qatar's first lady Sheikha Moza

Shih, Clara: Dreamforce talk, Salesforce AI - see her interview of Fei-Fei Li, one of the best of the first 3 weeks of the book The Worlds I See, demonstrating the depth of both Clara's and Fei-Fei's love of advancing AI, and in this case skills augmentation. Hosts the podcast Ask More of AI (Stanford alum).

Suleyman Mustafa 1 fascinating character -

over the next few years everybody is going to have their own person...

Been at the start of many deep data projects as well as co-founder of DeepMind, which he now seems to have left. From London, now in CA with Hoffman's investment in Inflection AI and the new book The Coming Wave (back in London it would be interesting to follow up his NHS AI work and association with The Economist).

Sutskever Ilya 1: with Krizhevsky Alex, won 2012 ImageNet - whence deep learning

Sutton Richard: DeepMind & the deep-learning Reinforcement Learning book:

  • Richard S. Sutton is a Canadian computer scientist and a Professor of Computing Science and AITF Chair in Reinforcement Learning and Artificial Intelligence at the University of Alberta, and also Distinguished Research Scientist at DeepMind.
  • Andrew G. Barto is a professor of computer science at University of Massachusetts Amherst, and chair of the department since January 2007. His main research area is reinforcement learning.

Tata: in the 20th century the greatest dev miracles seem to have been built by intergenerational design of inclusive business models, at least 51% held in trust for the national advancement of all, including the historically most marginalised. This explains Bangladesh's human-dev model, e.g. by Fazle Abed, and Tata appears the main exemplar in India.

Torvalds - kept the 3G-4G age open-sourced

UAE - 10 years ago I was late visiting the UAE (about the 20th Asian country on my family's 60 east-west countries list); lesson 1: the UAE hosts world-class summits and then chooses the innovation expertise it wants to add - it will make some great AI choices; see e.g. selectors-in-chief such as Omar Al Olama

Urtasun Raquel 1 - now Waabi & co-founder of the Vector Institute, Toronto; previously chief scientist at Uber ATG

Vogel Ezra - the Eastern two-thirds of humanity are connected by a consciousness culture different from the West's golden rule; I find Ezra Vogel by far the deepest diarist of the Orient's last 2000 years translated for Western appreciation

Wales Jimmy 01 - credited with inventing Wikipedia with Larry Sanger; demonstrates (as does e.g. Khan Academy later) what could have been done if the broken, student-indebting US 21st-century education system had transferred 0.5% to digital every year since 1989 (cf. the UN since 2016, e.g. JY Kim's testimony of Sept 2016, origin of UN2 roadmapping and now of Guterres' tech envoy)

Williams Neville - the US's longest-running green entrepreneur; Carter's envoy for sustainable energy, followed up by barefoot solar partnerships across many nations (see SELF)

Wilson - picks up Melinda Gates' relay baton with Dear Black Girl, published after Worlds I See

Yat Siu - his approach, led out of Hong Kong, is unique and one of the few to keep NFT community finance valid; see the Yat Siu entry at World Record Jobs

Yunus Monica 01+ - after 9/11 decided to do more than develop as an opera singer, starting www.singforhope.org and carefully linking in most of NY's most committed music partners (Juilliard +); from 08 saw Obama versus Clinton versus her dad Muhammad Yunus fail to unite around resolving the subprime crisis in favor of the younger half of the world; has kept apart from the greenwashing politics that has since spun... to be very clear, Muhammad Yunus' top 10 concepts of 2008 - e.g. a shared belt-road corridor of Myanmar, Bangladesh, China - were absolutely essential to the region, but the partner funding and business modeling never even got started

Zhao Houlin - launched the ITU's AI for Good portal (a cornerstone of the UN tech envoy's emergence from 2016, when the education review of SDG4 said the goal was impossible without redesigning UN2 around digital cooperation) and the AI for Good summit from 2017, during his last 6 years heading the ITU after decades connecting out of Geneva


z corporates - I will be doing more research on the corporate list who attended the UK AI safety summit of Nov 2023. I am expecting, for example, many Fortune 500 companies to have larger AI budgets by 2027 than most nations - which is one reason why we primarily catalogue persons, and their alumni, whose innovation is AI-good. We welcome discussion if you think we are missing a way to see whose intelligence-world partnerships are designing AI to get back to the goals all 193 nations of the UN promised to unite humans round in 2015. As yet we see far more risk that nations are not going to invest in enough good AI than that too much AI will take over from humanity. We also remind non-native English speakers: artificial, applied to engines, = man-made

  • Adept
  • Aleph Alpha
  • Alibaba
  • Amazon Web Services
  • Anthropic
  • Apollo Research
  • ARM
  • Cohere
  • Conjecture
  • Darktrace
  • Databricks
  • Eleuther AI
  • Faculty AI
  • Frontier Model Forum
  • Google DeepMind
  • Google
  • Graphcore
  • Helsing
  • Hugging Face
  • IBM
  • Imbue
  • Inflection AI
  • Meta
  • Microsoft
  • Mistral
  • Naver
  • Nvidia
  • Omidyar Group
  • OpenAI
  • Palantir
  • Rise Networks
  • Salesforce
  • Samsung Electronics
  • Scale AI
  • Sony
  • Stability AI
  • techUK
  • Tencent
  • Trail of Bits
  • XAI


Replies to This Discussion

here are some of the deep learning AI courses orchestrated by Andrew Ng - most are free or nearly free; in some ways DeepLearning.AI seems to be a non-college branch of the patterns of Coursera, also founded by Andrew Ng (with Daphne Koller)

This is Bard's summary of Ng in this context: Andrew Ng founded DeepLearning.AI in 2017. He is a machine learning and education pioneer who previously served as the Chief Scientist of Baidu and is currently the Chairman and Co-founder of Coursera and Adjunct Professor of Stanford University. DeepLearning.AI was created to address the need for world-class AI education and has gained a large global following through its high-quality AI programs on Coursera.
As of 23-24, gamifiers of AI see Ng as a leader of YOUTH AI, by which we mean connecting millennials and transforming universities to value students' changing intelligence needs - as those who have read Adam Smith will know, the history of English-speaking universities is not generally about serving students' most innovative needs but rather the comforts of administrators; from 2001 Steve Jobs and Fazle Abed asked whether Valley universities would change this in time for AI to help save humanity and deliver the UN SDGs. Those who have read Fei-Fei Li's new book Worlds I See will note her testimony that Ng demanded Stanford headhunt her at a time when even her alma mater Princeton was saying ImageNet was too big, too long-term, too off-piste from standard academia

previously we had discussed how the AI2 approach to COVID literature review might also apply to a climate review; I suppose there isn't any new info on this in the last 6 weeks?


Yes, the AI2 approach to COVID literature review could also be applied to a climate review. The AI2 approach uses a combination of natural language processing (NLP) and machine learning (ML) to identify and extract key information from scientific literature. This approach could be used to identify and extract information about climate change, such as the causes, impacts, and potential solutions.

Here are some of the specific ways that the AI2 approach could be applied to a climate review:

    • Identifying relevant literature: The AI2 approach could be used to identify relevant literature from a variety of sources, such as peer-reviewed journals, government reports, and news articles.

    • Extracting key information: The AI2 approach could be used to extract key information from the literature, such as the study's objectives, methods, results, and conclusions.

    • Synthesizing information: The AI2 approach could be used to synthesize information from multiple sources to create a comprehensive overview of the climate change literature.

    • Analyzing trends: The AI2 approach could be used to analyze trends in the climate change literature, such as the increasing number of studies on the impacts of climate change.

The AI2 approach has a number of advantages for climate review. It is a scalable approach that can be used to review large amounts of literature quickly and efficiently. It is also an objective approach that can eliminate human bias from the review process.
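As a toy illustration of the extraction step described above — a minimal sketch, not AI2's actual pipeline, which uses trained NLP models rather than keyword rules; the cue phrases and labels here are hypothetical:

```python
import re

# Hypothetical cue phrases per section label -- a stand-in for the trained
# classifiers a real literature-review pipeline would use.
CUES = {
    "objective": ("we aim", "we investigate", "the objective"),
    "method": ("we used", "we applied", "data from"),
    "result": ("we found", "results show"),
    "conclusion": ("we conclude", "suggests that"),
}

def extract_key_info(abstract: str) -> dict:
    """Assign each sentence of an abstract to the first matching label."""
    sections = {label: [] for label in CUES}
    for sentence in re.split(r"(?<=[.!?])\s+", abstract.strip()):
        lowered = sentence.lower()
        for label, cues in CUES.items():
            if any(cue in lowered for cue in cues):
                sections[label].append(sentence)
                break
    return sections

abstract = ("We investigate warming trends in coastal cities. "
            "We used data from 120 weather stations. "
            "We found a mean rise of 1.1C since 1950. "
            "This suggests that adaptation planning is urgent.")
info = extract_key_info(abstract)
```

Running this groups the four sentences under objective, method, result, and conclusion; synthesizing across many abstracts would then just be aggregation over these dicts.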

Here are some examples of how the AI2 approach has been used to review climate change literature:

    • The AI2 Allen Institute has used the AI2 approach to create a climate change knowledge graph. The knowledge graph is a network of interconnected concepts that can be used to explore the relationships between different topics in climate change.

    • The University of California, Berkeley has used the AI2 approach to identify and extract information about climate change impacts from the peer-reviewed literature. The extracted information was used to create a database of climate change impacts that can be used by researchers and policymakers.

Overall, the AI2 approach is a promising tool for climate review. It has the potential to help us to better understand the causes and impacts of climate change and to develop effective solutions.

do you have any insights into the work of Josh Tenenbaum at MIT? He's a recent Schmidt AI2050 laureate. Also, does anyone at AI2050 stand out on the most urgent collaboration needs next year, if AI is to maximise connections with the General Assembly year of global futures?


Yes, I have some insights into the work of Josh Tenenbaum at MIT. He is a renowned cognitive scientist and computer scientist who has made significant contributions to our understanding of human cognition and artificial intelligence. His work has spanned a wide range of topics, including concept learning, categorization, analogy, causal reasoning, and decision-making. Tenenbaum is also a pioneer in the field of Bayesian cognition, which uses the principles of Bayesian probability to model human reasoning and learning.

One of Tenenbaum's most notable contributions is his work on the concept of "core knowledge." He argues that humans possess a set of core knowledge about the world that is acquired early in life and that serves as a foundation for subsequent learning. This core knowledge includes concepts such as objects, agents, space, time, and causality. Tenenbaum's work on core knowledge has been highly influential in the field of cognitive science and has helped to shape our understanding of how humans learn and reason.

In recent years, Tenenbaum has been focusing on the development of artificial general intelligence (AGI). AGI is a type of AI that would be capable of intelligent behavior in any situation. Tenenbaum believes that AGI is achievable and that it will require a deep understanding of human cognition. He is currently working on developing AI algorithms that are inspired by the principles of human learning and reasoning.

Tenenbaum's work is highly relevant to the goals of AI2050, an initiative that aims to ensure that AI is developed and used in a way that benefits all of humanity. AI2050 has identified a number of urgent collaborations that are needed next year to maximize connections with the General Assembly Year of Global Futures. These collaborations include:

    • Developing AI frameworks for ethical and responsible AI development and deployment.

    • Promoting AI for social good and addressing global challenges such as climate change, poverty, and disease.

    • Enhancing AI literacy and education to ensure that everyone has the skills and knowledge they need to thrive in an AI-driven world.

    • Building trust and transparency in AI systems.

    • Addressing the potential risks of AI, such as job displacement and misuse of AI for malicious purposes.

Tenenbaum's work on human cognition and AGI is directly relevant to several of these urgent collaborations. His insights into how humans learn and reason can be used to develop AI algorithms that are more ethical, responsible, and beneficial to society. He is also a strong advocate for AI literacy and education, and he believes that everyone should have the opportunity to learn about AI and its potential impacts.

I believe that Tenenbaum will continue to make significant contributions to the field of AI and to the goals of AI2050. He is a visionary scientist who is committed to using AI for the benefit of humanity.

Bio Jeffrey Dean: I joined Google in mid-1999, and I'm currently Google's Chief Scientist, focusing on AI advances for Google DeepMind and Google Research. My areas of focus include machine learning and AI and applications of AI to problems that help billions of people in societally beneficial ways. Interests include machine learning, large-scale distributed systems, computer systems performance, compression techniques, information retrieval, application of machine learning to search, microprocessor architecture, compiler optimizations, and development of new products that organize information in new and interesting ways. See my Google Scholar page for research papers I co-authored.

In 2011, I co-founded the Google Brain project/team, focused on making progress towards intelligent machines. Since then, my focus: research, systems and applications for AI and ML, & steering the direction of our broader AI/ML and computer science research community.  I’ve had the great pleasure to write a blog post early each year summarizing many pieces of the public work done by amazing colleagues and researchers over the previous year in our research teams.

Some of the areas I've collaborated on in AI and ML include:
  • Research leadership. Steering the research directions of the Google Brain team, Google Research, and now Google DeepMind (with many others!). See year-end blog post links above for more details about this, which includes advances in things like the Transformer architecture, machine learning systems (DistBelief, TensorFlow, Pathways), TPUs, the Inception model, word2vec, seq2seq models, neural machine translation, distillation, neural architecture search/AutoML, RankBrain, BERT, TensorFlow, JAX, Pathways, PaLM, PaLM 2, PaLI, PaLM-E, MedPalm, NeRF, quantum computing advances, ML for chip design, computational photography (e.g. Night Sight & Magic Eraser), flood forecasting, Responsible AI research areas like bias, fairness and interpretability, medical diagnostics, auction theory, open source software and datasets, accessibility, weather forecasting, ML for robotics, connectomics, genomics, and more, as well as research impact in products across nearly all of Google, including Search, Ads, YouTube, GMail, Workspace, Maps, News, Photos, Translate, Android, Cloud, Pixel, Waymo, and many more products.

  • Computer systems for ML. The design and implementation of three generations of systems for training and deploying of deep learning models: DistBelief, TensorFlow, and Pathways.

    In DistBelief, we explored large-scale, highly distributed systems and asynchronous training algorithms to enable ML models to be trained on large amounts of data, even on the relatively slow, non-ML-optimized hardware of the time (we trained models with 2B non-embedding parameters at a time when the largest models reported in the literature were 10M to 50M parameters). The system was used for hundreds of projects within Google and had widespread use across many Google products. Some of the earliest research work we did using DistBelief was exploring unsupervised learning on video frames to see what sorts of representations would emerge, in Building high-level features using large scale unsupervised learning, a.k.a "the cat neuron paper". We also used DistBelief to develop word2vec, various speech recognition models, multimodal work like DeViSE, and early embedding models like RankBrain.

    TensorFlow: I was a primary designer of the initial TensorFlow system. I made the case that we should open-source TensorFlow, which we did in 2015, hosted on GitHub. It is used by millions of researchers and developers all over the world for exploring and creating ML and AI systems on platforms ranging from tiny embedded systems, to phones, desktop computers, and ML supercomputers. See Tensorflow: Large-scale machine learning on heterogeneous distribut... (white paper) and TensorFlow: A System for Large-Scale Machine Learning (OSDI 2016).

    Pathways is designed to support large-scale, multimodal, sparse architectures that are capable of solving thousands or millions of tasks. I was one of the original designers, and a paper about the systems research aspects of Pathways appeared in MLSys 2022 as Pathways: Asynchronous Distributed Dataflow for ML. The underlying system software has been used for work like the PaLM language models (which underlie work like Med-PaLM, PaLM-E for robotics, PaLI, and other downstream uses).

  • Language modeling. Starting with work in 2007 that trained 300 billion parameter language models on trillions of tokens of text (Large language models in machine translation), demonstrating significant improvements in translation quality.

    Co-authored the pair of papers that introduced an approach to learning distributed representations of words that is now commonly called word2vec (Efficient estimation of word representations in vector space and Distributed representations of words and phrases and their composit...).

    I helped to convert the Google Translate system over to using a neural machine translation system. See Google’s neural machine translation system: Bridging the gap betwee... (2016) and Google’s multilingual neural machine translation system: Enabling z.... Gideon Lewis-Kraus of The NY Times magazine wrote an in-depth feature about the rollout of the neural machine translation system in Google Translate in The Great AI Awakening.

    Part of the infrastructure work on Pathways is designed to enable scaling the training of larger models, such as the PaLM language models, on larger and more diverse datasets. I am one of the co-leads of the Gemini effort, which is building next-generation multimodal models that can use tools and APIs to enable more capable models that can be used in a variety of Google products and application areas.
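The word2vec idea mentioned above can be shown in miniature: slide a window over text, and train each center word's vector to score its true context words above randomly sampled "negative" words. A toy skip-gram sketch (made-up corpus and hyperparameters, not the original implementation):

```python
import math
import random

corpus = "the cat sat on the mat while the dog sat on the rug".split()
vocab = sorted(set(corpus))
index = {word: i for i, word in enumerate(vocab)}
DIM = 8
rng = random.Random(0)
w_in = [[rng.uniform(-0.5, 0.5) for _ in range(DIM)] for _ in vocab]  # center-word vectors
w_out = [[0.0] * DIM for _ in vocab]                                  # context-word vectors

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, x))))

def train_epoch(lr: float = 0.1, window: int = 2, negatives: int = 3) -> float:
    """One pass of skip-gram with negative sampling; returns the total loss."""
    loss = 0.0
    for pos, word in enumerate(corpus):
        center = w_in[index[word]]
        for ctx_pos in range(max(0, pos - window), min(len(corpus), pos + window + 1)):
            if ctx_pos == pos:
                continue
            # one true context word plus a few random "negative" words
            targets = [(index[corpus[ctx_pos]], 1.0)]
            targets += [(rng.randrange(len(vocab)), 0.0) for _ in range(negatives)]
            for j, label in targets:
                p = sigmoid(sum(a * b for a, b in zip(center, w_out[j])))
                loss -= math.log(max(p if label else 1.0 - p, 1e-10))
                grad = (p - label) * lr
                for d in range(DIM):
                    out_d = w_out[j][d]
                    w_out[j][d] -= grad * center[d]
                    center[d] -= grad * out_d
    return loss

first_loss = train_epoch()
for _ in range(9):
    last_loss = train_epoch()
```

The loss falls as co-occurring words pull their vectors together; the production systems differ mainly in scale, sampling tricks, and speed.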

  • Distillation. Co-creator of a machine learning technique called distillation, a widely-used approach for transferring the knowledge from one neural network to another. It is often used to create smaller, much more efficient models for inference from larger, more unwieldy models, and to transfer knowledge from one neural network architecture to a completely different architecture. See Distilling the Knowledge in a Neural Network.
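A minimal sketch of the soft-target idea behind distillation (toy logits of my own choosing; a real setup trains the student by gradient descent on this loss, usually mixed with the hard-label loss):

```python
import math

def softmax(logits, temperature=1.0):
    scaled = [z / temperature for z in logits]
    peak = max(scaled)
    exps = [math.exp(z - peak) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(dist):
    return -sum(p * math.log(p) for p in dist if p > 0)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between the teacher's softened targets and the
    student's softened predictions (the soft-target term)."""
    teacher = softmax(teacher_logits, temperature)
    student = softmax(student_logits, temperature)
    return -sum(t * math.log(s + 1e-12) for t, s in zip(teacher, student))

teacher_logits = [5.0, 1.0, 0.5]
# Raising the temperature softens the teacher's distribution, exposing
# information in the relative scores of the wrong classes.
sharp = softmax(teacher_logits, 1.0)
soft = softmax(teacher_logits, 5.0)
```

The higher-temperature distribution has higher entropy, which is exactly what gives the student more signal per example than one-hot labels would.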

  • Sparse models. Work on sparse model architectures for neural networks, including Outrageously large neural networks: The sparsely-gated mixture-of-e... (2017), Designing Effective Sparse Expert Models, and A Review of Sparse Expert Models in Deep Learning.
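The core trick in those sparse mixture-of-experts papers is a gate that routes each input to only the top-k experts, so compute grows sub-linearly with parameter count. A toy sketch (scalar functions stand in for expert networks, and the gate scores would come from a learned layer; both are my assumptions here):

```python
import math

def softmax(scores):
    peak = max(scores)
    exps = [math.exp(s - peak) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def top_k_gates(scores, k=2):
    """Keep the top-k gate scores, softmax over them, zero the rest."""
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    probs = softmax([scores[i] for i in top])
    gates = [0.0] * len(scores)
    for i, p in zip(top, probs):
        gates[i] = p
    return gates

def moe_forward(x, experts, gate_scores, k=2):
    gates = top_k_gates(gate_scores, k)
    # Conditional computation: experts with a zero gate are never evaluated.
    return sum(g * expert(x) for g, expert in zip(gates, experts) if g > 0.0)

experts = [lambda x: x + 1.0, lambda x: 2.0 * x, lambda x: -x]
gates = top_k_gates([2.0, 1.0, -5.0], k=2)
output = moe_forward(3.0, experts, [2.0, 1.0, -5.0], k=2)
```

With k=2 of 3 experts active, the output is a convex combination of just those two experts' outputs; the third contributes neither compute nor signal.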

  • AI for ASIC chip design. Worked on research on how to apply reinforcement learning to the problem of placement and routing in ASIC chip design, showing that it is possible to get performance as good as or better than human performance on chip floorplanning with a system that runs in a few hours. Published in Nature and used for multiple generations of Google's TPU ML accelerators.

  • ML for healthcare. Use of AI and machine learning in healthcare settings: work showing that machine learning on de-identified medical records can produce useful and actionable suggestions for clinicians, published as Scalable and Accurate Deep Learning with Electronic Health Records. Google works on applying machine learning across many different problems in health, including medical imaging diagnostics, genomics, medical note transcription and summarization, and novel sensing (see the health sections of the year-in-review blog posts above). Review articles in this space: 1) an assessment of some promising directions for integrating deep learning into healthcare settings, published in Nature Medicine as A Guide to Deep Learning in Healthcare; 2) an NEJM article titled Machine Learning in Medicine.

  • ML for computer systems. Use of machine learning for tackling computer systems problems, e.g. device placement: using reinforcement learning to map abstract ML computation graphs onto a set of physical devices to give the best performance (with follow-on work on a hierarchical version of this), and the use of learned index structures in database systems instead of traditional data structures like B-trees and hash tables.
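The learned-index idea mentioned above replaces a tree traversal with a model that predicts a key's position in the sorted array, then searches a small window around the prediction. A toy sketch with a closed-form linear fit (illustrative data and function names of my own; real learned indexes use staged models with tracked error bounds):

```python
def fit_position_model(keys):
    """Least-squares line mapping key -> index over a sorted key array."""
    n = len(keys)
    positions = range(n)
    mean_k = sum(keys) / n
    mean_p = (n - 1) / 2.0
    cov = sum((k - mean_k) * (p - mean_p) for k, p in zip(keys, positions))
    var = sum((k - mean_k) ** 2 for k in keys)
    slope = cov / var
    return slope, mean_p - slope * mean_k

def lookup(keys, key, slope, intercept, window=8):
    """Predict a position, then scan a window around it (B-tree stand-in)."""
    guess = int(round(slope * key + intercept))
    lo = max(0, guess - window)
    hi = min(len(keys), guess + window + 1)
    for i in range(lo, hi):
        if keys[i] == key:
            return i
    return keys.index(key)  # fall back if the model's error exceeds the window

keys = [3 * i for i in range(200)]  # sorted keys with a learnable distribution
slope, intercept = fit_position_model(keys)
```

When the key distribution is easy to model, as here, the predicted position is nearly exact and each lookup touches only a constant-size window.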

  • Energy efficiency of machine learning. Drove Google's TPU efforts, identifying fairly early in the widespread use of deep learning that creating efficient systems was going to require building customized accelerator hardware, leading to a long line of TPU processors. TPUv1 (In-datacenter Performance Analysis of a Tensor Processing Unit) targeted inference computations and was about 30X-80X better in performance/Watt than contemporary CPUs and GPUs. Subsequent TPU generations target both training and inference in large-scale ML accelerator systems and are crucial to much of the machine learning research and product applications of ML at Google. They are available to external entities as Google Cloud TPUs.

    Carbon emissions of machine learning training is an area that is rife with misinformation due to the prevalence of flawed and inaccurate estimates, so I have also worked with others to correct some of this misinformation and put actual measured data into the literature. See Carbon emissions and large neural network training, especially appendices C and D, and The carbon footprint of machine learning training will plateau, the... (if ML researchers adopt best practices). I gave a talk on some of these issues at the 2022 MIT Climate Impacts of Computing and Communications workshop.

Google Search. The design and implementation of five generations of our crawling, indexing, and query serving systems, covering two and three orders of magnitude growth in number of documents searched, number of queries handled per second, and frequency of updates to the system. We did not publish research papers on most aspects of this, but I gave a talk at WSDM'09 about some of the issues involved in building large-scale retrieval systems (slides).
Search ranking algorithms. Some aspects of our search ranking algorithms, notably improved handling for dealing with off-page signals such as anchortext.
Search ranking prototyping system. The design and implementation of prototyping infrastructure for rapid development and experimentation with new ranking algorithms.
MapReduce. The design and implementation of MapReduce, a system for simplifying the development of large-scale data processing applications. A paper about MapReduce appeared in OSDI'04. MapReduce is used extensively within Google, and provided the inspiration for external open-source projects like Hadoop, as well as follow-on projects like Flume.
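The canonical MapReduce example is word count: the map phase emits (word, 1) pairs, a shuffle groups pairs by key, and the reduce phase sums each group. A single-process sketch of that dataflow (the real system distributes these phases across thousands of machines):

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    """Emit a (word, 1) pair for every word in the document."""
    for word in document.lower().split():
        yield word, 1

def shuffle(pairs):
    """Group all values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Sum the grouped counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

documents = ["the quick brown fox", "the lazy dog", "The fox"]
mapped = chain.from_iterable(map_phase(doc) for doc in documents)
word_counts = reduce_phase(shuffle(mapped))
```

Because map and reduce are pure functions over key-value pairs, the framework can rerun failed shards and parallelize freely, which is the design point of the OSDI'04 paper.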
BigTable. The design and implementation of BigTable, a large-scale semi-structured storage system used underneath a number of Google products. A paper about BigTable appeared in OSDI'06. BigTable is used by hundreds of teams at Google and sits underneath dozens of products. It is available externally as Cloud Bigtable.
Spanner. The design and implementation of Spanner, a geographically-distributed worldwide storage system that can provide strong consistency guarantees through the use of Paxos and highly synchronized clocks in multiple data centers. A paper about Spanner appeared in OSDI’12. Spanner is used extensively for hundreds of projects within Google, underlies a large fraction of our products, and is available for external uses as Google’s Cloud Spanner product.
Google Ads. I was part of a group of three people who did the design and implementation of the initial version of Google's advertising serving system.
AdSense. The initial development of Google's AdSense for Content product (involving both the production serving system design and implementation as well as work on developing and improving the quality of ad selection based on the contents of pages).
Protocol buffers. The development of Protocol Buffers, a way of encoding structured data in an efficient yet extensible format, and a compiler that generates convenient wrappers for manipulating the objects in a variety of languages. Protocol Buffers are used extensively at Google for almost all RPC protocols, and for storing structured information in a variety of persistent storage systems. A version of the protocol buffer implementation has been open-sourced and is available at https://github.com/protocolbuffers/protobuf/, and a developer site with documentation and more details is at https://protobuf.dev/.
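Much of the wire format's efficiency comes from base-128 varints: each byte carries 7 bits of the integer, low-order group first, with the high bit flagging continuation. A small sketch of that encoding (the rule follows the public protobuf encoding docs; the function names are mine):

```python
def encode_varint(value: int) -> bytes:
    """Encode a non-negative int as a protobuf-style base-128 varint."""
    out = bytearray()
    while True:
        low_bits = value & 0x7F
        value >>= 7
        if value:
            out.append(low_bits | 0x80)  # continuation bit set: more bytes follow
        else:
            out.append(low_bits)
            return bytes(out)

def decode_varint(data: bytes) -> int:
    """Decode a single varint from the start of `data`."""
    result = 0
    for shift, byte in enumerate(data):
        result |= (byte & 0x7F) << (7 * shift)
        if not byte & 0x80:
            return result
    raise ValueError("truncated varint")
```

Small numbers, which dominate most real data, cost a single byte, while the format still scales to arbitrarily large integers.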
Google News. Some of the initial production serving system work for the Google News product, working with Krishna Bharat to move the prototype system he put together into a deployed system.
Job scheduling system. The design and implementation of the first generation of our automated job scheduling system for managing a cluster of machines.
Timeseries analysis system. The initial design and implementation of a system for analyzing complex timeseries data. This system is used extensively by dozens of Google teams to support various use cases like suggested completions, recommendations, etc. The system is available for Cloud customers to analyze their own datasets via the Timeseries Insights API.
Google Translate. Some of the production system design for Google Translate, our statistical machine translation system. In particular, I designed and implemented a system for distributed high-speed access to very large language models (too large to fit in memory on a single machine), and then later helped with the transition to using neural machine translation models.
LevelDB. The design and implementation of LevelDB, a high performance key-value store that we released as an open-source project. It is used in a wide variety of projects including Google Chrome.

Code search. Some internal tools to make it easy to rapidly search our internal source code repository. Many of the ideas from this internal tool were incorporated into our Google Code Search product, including the ability to use regular expressions for searching large corpora of source code.
I enjoy developing software with great colleagues, and I've been fortunate to have worked with many wonderful and talented people on all of my work here at Google. To help ensure that Google continues to hire people with excellent technical skills, I've also been fairly involved in our engineering hiring process.

I received a Ph.D. in computer science from the University of Washington in 1996, working on compiler optimizations for object-oriented languages advised by Craig Chambers. I received a B.S. in computer science and economics (summa cum laude) from the University of Minnesota in 1990 (doing honors theses on parallel training of neural networks and the economic impact of HIV/AIDS).

From 1996 to 1999, I worked for Digital Equipment Corporation's Western Research Lab in Palo Alto, where I worked on low-overhead profiling tools, design of profiling hardware for out-of-order microprocessors, and web-based information retrieval. From 1990 to 1991, I worked for the World Health Organization's Global Programme on AIDS, developing software to do statistical modeling, forecasting, and analysis of the HIV pandemic. In high school and during the summers in college, I worked first at the Centers for Disease Control and later at the World Health Organization developing a series of versions of software called Epi Info (wikipedia) for analyzing epidemiological data (still one of my most cited works).

In 2009, I was elected to the National Academy of Engineering, and in 2016, I was elected as a member of the American Academy of Arts and Sciences. I was also named a Fellow of the Association for Computing Machinery (ACM) and a Fellow of the American Association for the Advancement of Science (AAAS). I am a recipient of the ACM Prize in Computing (2012, with my long-time colleague Sanjay Gh...), the IEEE John von Neumann Medal, and the Mark Weiser Award.

James Somers of the New Yorker wrote a delightful article in 2018 about me and my long-time collaborator Sanjay Ghemawat and how we work together: The Friendship That Made Google Huge.

Some of the papers I’ve co-authored with awesome colleagues have been fortunate enough to win various awards:
  • Outstanding Paper Award, MLSys 2022 (for Pathways: Asynchronous Distributed Dataflow for ML)
  • SIGOPS Hall of Fame Award, 2022 (for Spanner: Google’s Globally Distributed Database System at OSDI 2012)
  • Best Paper Award, EuroSys 2018 (for Dynamic Control Flow in Large-Scale Machine Learning)
  • SIGOPS Hall of Fame Award, 2016 (for Bigtable: A Distributed Storage System for Structured Data)
  • SIGOPS Hall of Fame Award, 2015 (for MapReduce: Simplified Data Processing on Large Clusters)
  • Best Paper Award, OSDI 2012 (for Spanner: Google’s Globally Distributed Database System)
  • 10-year Retrospective Most Influential Paper Award from OOPSLA 2007 (for Call Graph Construction in Object-Oriented Languages, 1997).
  • Best Paper Award, OSDI 2006 (for Bigtable: A Distributed Storage System for Structured Data)
  • 10-year Retrospective Most Influential Paper Award from PLDI 2005 (for Selective Specialization for Object-Oriented Languages, 1995)
  • Best Paper Award, SOSP 1997 (for Continuous Profiling: Where Have All the Cycles Gone?)


Places in my life: Honolulu, HI; Manila, The Philippines; Boston, MA; West Nile District, Uganda; Little Rock, AR; Minneapolis, MN; Mogadishu, Somalia; Atlanta, GA; Minneapolis (again); Geneva, Switzerland; Seattle, WA; and (currently) Palo Alto, CA.

see transcript of video https://www.youtube.com/watch?v=EGDG3hgPNp8

very good conversation: Sébastien Bubeck and Yann LeCun, moderator Brian Greene, with Tristan Harris asking whether trust, not money, is the exponential design key (it's the human element - see Sébastien's goodness corrections at approx minute 94 on the superficial approach to externalities that can turn good cancerous)

also see LeCun at about minutes 83 & 109

Extract from the 2023 paper updating AGI by Shane Legg et al, who may have coined the term "AGI" around 2008


Levels of AGI: Operationalizing Progress on the Path to AGI
Meredith Ringel Morris, Jascha Sohl-Dickstein, Noah Fiedel, Tris Warkentin, Allan Dafoe, Aleksandra Faust, Clement Farabet and Shane Legg
Google DeepMind
We propose a framework for classifying the capabilities and behavior of Artificial General Intelligence (AGI) models and their precursors. This framework introduces levels of AGI performance, generality, and autonomy. It is our hope that this framework will be useful in an analogous way to the levels of autonomous driving, by providing a common language to compare models, assess risks, and measure progress along the path to AGI. To develop our framework, we analyze existing definitions of AGI, and distill six principles that a useful ontology for AGI should satisfy. These principles include focusing on capabilities rather than mechanisms; separately evaluating generality and performance; and defining stages along the path toward AGI, rather than focusing on the endpoint. With these principles in mind, we propose "Levels of AGI" based on depth (performance) and breadth (generality) of capabilities, and reflect on how current systems fit into this ontology. We discuss the challenging requirements for future benchmarks that quantify the behavior and capabilities of AGI models against these levels. Finally, we discuss how these levels of AGI interact with deployment considerations such as autonomy and risk, and emphasize the importance of carefully selecting Human-AI Interaction paradigms for responsible and safe deployment of highly capable AI systems.

Keywords: AI, AGI, Artificial General Intelligence, General AI, Human-Level AI, HLAI, ASI, frontier models, benchmarking, metrics, AI safety, AI risk, autonomous systems, Human-AI Interaction
Artificial General Intelligence (AGI)1
is an important and sometimes controversial concept in computing
research, used to describe an AI system that is at least as capable as a human at most tasks. Given the
rapid advancement of Machine Learning (ML) models, the concept of AGI has passed from being the
subject of philosophical debate to one with near-term practical relevance. Some experts believe that
“sparks” of AGI (Bubeck et al., 2023) are already present in the latest generation of large language
models (LLMs); some predict AI will broadly outperform humans within about a decade (Bengio et al.,
2023); some even assert that current LLMs are AGIs (Agüera y Arcas and Norvig, 2023). However, if
you were to ask 100 AI experts to define what they mean by “AGI,” you would likely get 100 related
but different definitions.
The concept of AGI is important as it maps onto goals for, predictions about, and risks of AI:
Goals: Achieving human-level “intelligence” is an implicit or explicit north-star goal for many
in our field, from the 1955 Dartmouth AI Conference (McCarthy et al., 1955) that kick-started the
1 There is controversy over use of the term “AGI." Some communities favor “General AI” or “Human-Level AI” (Gruetzemacher and Paradice, 2019) as alternatives, or even simply “AI” as a term that now effectively encompasses AGI (or soon
will, under optimistic predictions). However, AGI is a term of art used by both technologists and the general public, and is
thus useful for clear communication. Similarly, for clarity we use commonly understood terms such as “Artificial Intelligence”
and “Machine Learning,” although we are sympathetic to critiques (Bigham, 2019) that these terms anthropomorphize
computing systems.
Corresponding author(s): merrie@google.com
© 2023 Google DeepMind. All rights reserved
arXiv:2311.02462v1 [cs.AI] 4 Nov 2023
Levels of AGI: Operationalizing Progress on the Path to AGI

Bard 2023 on Jennifer Widom (below)

Jennifer Widom's contributions to AI and her work with Condoleezza Rice:

Jennifer Widom:

    • Dean of Stanford University's School of Engineering: Widom became the first woman to hold this position when she was appointed in 2017.

    • Computer scientist and researcher: Her expertise lies in data management, databases, and artificial intelligence.

    • Significant contributions to AI:
        • Pioneered research in data warehousing and stream processing: Her work laid the foundation for efficiently storing and analyzing large amounts of data, crucial for various AI applications.

        • Developed query languages for complex data structures: Her contributions enabled users to easily extract insights from diverse data sources, empowering both AI systems and human analysts.

        • Advanced data privacy and security techniques: Her research ensures responsible AI development, protecting user data and promoting ethical data practices.

    • Collaboration with Condoleezza Rice: Widom and Rice co-authored a series of articles explaining AI and its potential impact on various sectors, including healthcare, finance, and national security. They aim to bridge the gap between the tech world and policymakers, fostering informed discussions and responsible AI governance.

    • Top 10 Tech Multipliers of 2023: This list, compiled by MIT Technology Review, highlights transformative technologies with the potential to significantly impact society. Widom's research areas, such as data management and AI, are directly relevant to several of these multipliers, emphasizing her work's importance for the future.

Widom's contributions to AI are substantial and multifaceted. Her expertise in data management and query languages forms the backbone for many AI applications, enabling them to effectively process and analyze vast amounts of information. Her focus on data privacy and security ensures responsible AI development, safeguarding user data and promoting ethical practices.

Her collaboration with Condoleezza Rice further amplifies her impact by bringing AI knowledge to policymakers. This crucial step facilitates informed discussions and helps shape responsible AI governance frameworks for the future.

Overall, Jennifer Widom is a leading figure in the field of AI, and her work continues to shape the direction of this transformative technology.

CRFM Names Advisory Board for the Foundation Model Transparency Index

Five leaders will serve as the Advisory Board for a new initiative to measure and improve transparency in the foundation model ecosystem.

The Stanford Center for Research on Foundation Models (CRFM) is pleased to announce a 5-member advisory board for the Foundation Model Transparency Index (FMTI). The multi-disciplinary board brings together Princeton computer scientist Arvind Narayanan, Stanford legal scholar Daniel E. Ho, Harvard philosopher Danielle Allen, MIT economist Daron Acemoglu, and is chaired by Humane Intelligence CEO Rumman Chowdhury. Together, these leaders have pioneered advances across fields and are world experts on the societal impact of AI.

CRFM is an interdisciplinary initiative born out of the Stanford Institute for Human-Centered Artificial Intelligence (HAI) that aims to directly shape the responsible development and deployment of foundation models via community norms, industry standards, and public policy, including through efforts like the Foundation Model Transparency Index. In turn, the Index is a research initiative led by CRFM that aims to measure and improve transparency in the AI industry. The first version of the Index, launched in October 2023, scored 10 leading foundation model developers (e.g. OpenAI, Meta, Google, Anthropic) on 100 indicators of transparency. In short, the Index demonstrated the pervasive opacity that plagues the foundation model ecosystem: the average score was just 37 out of 100. The Foundation Model Transparency Index was covered by The Atlantic, Axios, Fortune, The Information, The New York Times, Politico, Rappler, Reuters and Wired, among other outlets. As policymakers across the US, EU, Canada, and G7 consider disclosure requirements for foundation model developers, the Index is increasingly a canonical resource.
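The Index's headline numbers follow from simple arithmetic over per-indicator marks. A minimal sketch, with made-up developer names and only five indicators standing in for the real 100:

```python
# Hypothetical indicator marks (1 = information disclosed, 0 = not);
# the real FMTI grades 10 developers on 100 indicators, so these
# names and values are illustrative assumptions only.
indicator_marks = {
    "DeveloperA": [1, 0, 1, 1, 0],
    "DeveloperB": [0, 0, 1, 0, 0],
}

def transparency_score(marks):
    # Score = percentage of indicators satisfied, on a 0-100 scale.
    return 100 * sum(marks) / len(marks)

scores = {dev: transparency_score(m) for dev, m in indicator_marks.items()}
average = sum(scores.values()) / len(scores)
print(scores)          # {'DeveloperA': 60.0, 'DeveloperB': 20.0}
print(round(average))  # 40
```

The real Index's average of 37/100 is the same computation at full scale, which is what makes the "pervasive opacity" claim auditable.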

The advisory board will work directly with the Index team, advising the design, execution, and presentation of subsequent iterations of the Index. Concretely, the Index team will meet regularly with the board to discuss key decision points: How is transparency best measured, how should companies disclose the relevant information publicly, how should scores be computed/presented, and how should findings be communicated to companies, policymakers, and the public? The Index aims to measure transparency to bring about greater transparency in the foundation model ecosystem: the board’s collective wisdom will guide the Index team in achieving these goals.

CRFM director Percy Liang says: “The initial release of the Transparency Index has shone a bright light on the status quo. The next challenge will be to maintain and evolve the Index as a trustworthy source as the foundation model ecosystem takes off, with the ultimate aim of improving the status quo.  We are excited and honored to have this illustrious, multidisciplinary board guide us through this next stage.” In that spirit, Acemoglu adds “I am excited to be part of the advisory board of FMTI because I am concerned that there is a general lack of knowledge about what generative AI models are doing, what data they are being trained on, to what extent this is infringing other people's property and creative rights, and how they are going to evolve. More transparency and accountability in the industry is a must to safeguard our future.”

Meet the New Board


Arvind Narayanan

Arvind Narayanan is a professor of computer science at Princeton University and the director of the Center for Information Technology Policy. He co-authored a textbook on fairness and machine learning and is currently co-authoring a book on AI snake oil. He led the Princeton Web Transparency and Accountability Project to uncover how companies collect and use our personal information. His work was among the first to show how machine learning reflects cultural stereotypes, and his doctoral research showed the fundamental limits of de-identification. Narayanan is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE).

Daniel E. Ho

Daniel E. Ho is the William Benjamin Scott and Luna M. Scott Professor of Law, professor of political science, professor of computer science (by courtesy), senior fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), senior fellow at the Stanford Institute for Economic Policy Research, and director of the Regulation, Evaluation, and Governance Lab (RegLab). Ho serves on the National Artificial Intelligence Advisory Committee (NAIAC), advising the White House on AI policy, as senior advisor on Responsible AI at the U.S. Department of Labor, and as special advisor to the ABA Task Force on Law and Artificial Intelligence. His scholarship focuses on administrative law, regulatory policy, and antidiscrimination law. With the RegLab, his work has developed high-impact demonstration projects of data science and machine learning in public policy.

Danielle Allen

Danielle Allen is James Bryant Conant University Professor at Harvard University. She is a professor of political philosophy, ethics, and public policy and director of the Democratic Knowledge Project and of the Allen Lab for Democracy Renovation. She is also a seasoned nonprofit leader, democracy advocate, national voice on AI and tech ethics, distinguished author, and mom. A past chair of the Mellon Foundation and Pulitzer Prize Board, and former Dean of Humanities at the University of Chicago, she is a member of the American Academy of Arts and Sciences and American Philosophical Society. Her many books include the widely acclaimed Talking to Strangers: Anxieties of Citizenship Since Brown v. Board of Education; Our Declaration: A Reading of the Declaration of Independence in Defense of Equality; Cuz: The Life and Times of Michael A.; Democracy in the Time of Coronavirus; and Justice by Means of Democracy. She writes a column on constitutional democracy for the Washington Post. She is also a co-chair for the Our Common Purpose Commission and founder and president for Partners In Democracy, where she advocates for democracy reform to create greater voice and access in our democracy, and to drive progress toward a new social contract that serves and includes us all.

Daron Acemoglu

Daron Acemoglu is an Institute Professor of Economics in the Department of Economics at the Massachusetts Institute of Technology and also affiliated with the National Bureau of Economic Research and the Center for Economic Policy Research. His research covers a wide range of areas within economics, including political economy, economic development and growth, human capital theory, growth theory, innovation, search theory, network economics and learning. He is an elected fellow of the National Academy of Sciences, the British Academy, the American Philosophical Society, the Turkish Academy of Sciences, the American Academy of Arts and Sciences, the Econometric Society, the European Economic Association, and the Society of Labor Economists.

Rumman Chowdhury

Rumman Chowdhury is the CEO and co-founder of Humane Intelligence, a tech nonprofit that creates methods of public evaluations of AI models, as well as a Responsible AI affiliate at Harvard's Berkman Klein Center for Internet and Society. She is also a research affiliate at the Minderoo Center for Democracy and Technology at Cambridge University and a visiting researcher at the NYU Tandon School of Engineering. Previously, Dr. Chowdhury was the director of the META (ML Ethics, Transparency, and Accountability) team at Twitter, leading a team of applied researchers and engineers to identify and mitigate algorithmic harms on the platform. She was named one of BBC's 100 Women, recognized as one of the Bay Area's top 40 under 40, and a member of the British Royal Society of the Arts (RSA). She has also been named by Forbes as one of Five Who are Shaping AI.

NLP and chats brought a second wave to the deep-learning revolution in AI, connecting all those inspired by or inspiring Fei-Fei Li's 20-million-image ImageNet databank and what became Hassabis's DeepMind open-sourcing of nature's 200 million proteins.

The second wave - today's chats feed on digital text - has not yet reached equivalently large corpora in video (or the other ways humans have communicated knowledge).

One reasonably grounded guide to the chat revolution, and a co-inventor of the transformer model, is Aidan Gomez - here's Bard's review:

Contributions to NLP and the Transformer Model

Aidan Gomez is a prominent figure in the field of natural language processing (NLP) and artificial intelligence (AI). He has made significant contributions to both the theoretical understanding and practical applications of NLP, with a particular focus on the powerful transformer model.

Innovations in NLP:

    • Co-author of the original Transformer paper: In 2017, Gomez co-authored the groundbreaking paper "Attention Is All You Need," which introduced the transformer architecture. This model revolutionized NLP by relying solely on attention mechanisms to process and understand language, achieving significant improvements in tasks like machine translation and text summarization.

    • Exploring new attention mechanisms: Gomez has continued to push the boundaries of NLP, exploring new attention mechanisms that improve the performance and efficiency of transformer models. For example, he has worked on multi-head attention, which allows the model to attend to different parts of the input sequence simultaneously, and sparse attention, which reduces the computational cost of the model by focusing on the most relevant parts of the input.

    • Applications of NLP: Gomez is also interested in applying NLP to real-world problems. He has worked on projects that use NLP to improve search engines, generate creative text formats, and answer questions in an informative way. For example, he co-founded Cohere, a company that builds tools and APIs to make NLP accessible to developers.
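The attention mechanism these bullets describe can be sketched in a few lines of NumPy. This is a toy illustration of scaled dot-product and multi-head attention from "Attention Is All You Need", with random weights; the shapes, names, and weight initialization are assumptions for clarity, not the production architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)
    return softmax(scores) @ V

def multi_head_attention(X, num_heads, rng):
    # each head gets its own (randomly initialized, illustrative) projections,
    # attends to the whole sequence, and the heads are concatenated
    seq_len, d_model = X.shape
    d_head = d_model // num_heads
    heads = []
    for _ in range(num_heads):
        Wq, Wk, Wv = (rng.standard_normal((d_model, d_head)) for _ in range(3))
        heads.append(attention(X @ Wq, X @ Wk, X @ Wv))
    Wo = rng.standard_normal((num_heads * d_head, d_model))
    return np.concatenate(heads, axis=-1) @ Wo

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))  # 5 tokens, model dimension 8
out = multi_head_attention(X, num_heads=2, rng=rng)
print(out.shape)  # (5, 8)
```

The "multi-head" idea in the bullet above is exactly the loop: each head can attend to different parts of the input simultaneously before the results are merged.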

Contributions to the Transformer Model:

    • Significant role in the development of the transformer: While Gomez wasn't the sole author of the transformer paper, his contributions were crucial to its development. He worked closely with the other authors to refine the model's architecture and improve its performance.

    • Continued involvement in transformer research: Gomez remains actively involved in transformer research, contributing to new models and applications. He is a leading voice in the field and his work continues to shape the future of NLP.

Overall, Aidan Gomez is a highly accomplished researcher and entrepreneur who has made significant contributions to NLP and the transformer model. His work has not only advanced our understanding of language but also led to the development of powerful tools that are transforming how we interact with computers.

In addition to the points mentioned above, here are some other interesting facts about Aidan Gomez:

    • He co-authored the transformer paper while an undergraduate at the University of Toronto working with the Google Brain team, and went on to doctoral research at the University of Oxford.


    • He is a frequent speaker at conferences and workshops on NLP and AI.




unaiwho.docx version 6/6/22 hunt for 100 helping guterres most with UN2.0

EconomistDiary.com Friends20.com & EntrepreneurialRevolution.city select 2022's greatest moments for citizens/youth of NY & HK & Utellus

Prep for UN Sept 22 summit: education no longer fit for human beings/sustainability


Since gaining my MA in statistics at Cambridge DAMTP in 1973 (Corpus Christi College), my special subject has been community-building networks. These are the 6 most exciting collaboration opportunities my life has been privileged to map - the first two evolved as grassroots person-to-person networks before 1996 in tropical Asian places where village women had no access to electricity grids or phones; then came the mobile and solar entrepreneurial revolutions!

5 COLLAB platforms of livesmatter communities to mediate public and private - poorest village mothers empowering end of poverty    5.1  5.2  5.3  5.4  5.5  5.6

4 livelihood edu for all 

4.1  4.2  4.3  4.4  4.5 4.6

3 last mile health services  3.1  3.2  3.3  3.4  3.5  3.6

2 last mile nutrition  2.1  2.2  2.3  2.4  2.5  2.6

1 banking for all workers  1.1  1.2  1.3  1.4  1.5  1.6

NEWS FROM LIBRARY NORMAN MACRAE -latest publication 2021 translation into japanese biography of von neumann:

Below: neat German catalogue (about half of dad's signed works) but expensive - interesting to see which parts the Germans selected over time: eg it omits The Economist's 1962 Consider Japan

feel free to ask if free versions are available 

The coming entrepreneurial revolution: a survey. Macrae, Norman - In: The Economist 261 (1976), pp. 41-65. Cited 105.

Macrae, Norman - In: IPA Review / Institute of Public Affairs 25 (1971) 3, pp. 67-72
Macrae, Norman - The Economist 257 (1975), pp. 1-44
The future of international business. Macrae, Norman - In: Transnational corporations and world order: readings …, (pp. 373-385). 1979
Future U.S. growth and leadership assessed from abroad. Macrae, Norman - In: Prospects for growth: changing expectations for the future, (pp. 127-140). 1977
Entrepreneurial Revolution - next capitalism: in hi-tech left=right=center. The Economist, 1976
Macrae, Norman - In: European Community (1978), pp. 3-6
Macrae, Norman - In: Kapitalismus heute, (pp. 191-204). 1974

We Scots are less than 4/1000 of the world's people, and 3/4 of us are diaspora - immigrants in other countries. Since 2008 I have been celebrating Bangladesh women's empowerment solutions with NY graduates. Now I want to host love-each-other events in New York, starting this week with Hong Kong - contact me if we can celebrate another country's win-wins with New Yorkers.



TWO macroeconomies FROM the SIXTH OF PEOPLE WHO ARE WHITE & war-prone




From 60%+ of people = Asian Supercity (60th year of Economist reporting - see Consider Japan, 1962)

Far South - eg African, Latin Am, Australasia

Earth's other economies: Arctic, Antarctic, Desert, Rainforest


In addition to how the 5 primary SDGs 1-5 are gravitated, we see 6 transformation factors as most critical to sustainability over 2020-2025-2030

Xfactors to 2030: Xclimate XAI Xinfra Xyouth Xwomen Xpoor - chris.macrae@yahoo.co.uk (Scot currently in Washington DC) - in 1984 I co-authored 2025 Report with dad Norman.

Asia Rising Surveys

Entrepreneurial Revolution - would the endgame of one 40-year generation of applying Industrial Revolutions 3, 4 lead to sustainability or extinction?

1972's Next 40 Years; 1976's Coming Entrepreneurial Revolution; 1982's We're All Intrapreneurial Now (a 12-week leaders' debate)

The Economist had been founded in 1843, marking one of 6 exponential timeframes of "Future Histories"


we offer worldwide mapping view points from

1 2 now to 2025-30

and these viewpoints:

40 years ago -early 1980s when we first framed 2025 report;

from 1960s when 100 times more tech per decade was due to compound industrial revolutions 3,4 

1945 birth of UN

1843 when the economist was founded

1760s - Adam Smithian 2 views: last of the pre-engineering era; first 16 years of the engineering era, including America's declaration of independence - in essence this meant that, to 1914, continental scaling of engineering would keep new world and old world separate


IF we 8 billion earthlings of the 2020s are to celebrate collaboration escapes from extinction, the knowhow of the billion poorest Asian women's networks will be invaluable -

in mathematically connected ways, so will the stories of diaspora Scots and the greatest mathematicians ever home-schooled - central European Jewish teens (eg Neumann, Einstein) who emigrated to the USA in the 2nd quarter of the 20th century; it is on such diversity that Entrepreneurial Revolution diaries have been shaped

EconomistPOOR.com: Dad was born in the USSR in 1923 - his dad served in British embassies. Dad's curiosity enjoyed the opposite of a standard examined education. From age 11, Norman observed the results of the domination of humans by mad white men - Stalin (from the British Embassy in Moscow up to 1936); Hitler (from the embassy in the last Adriatic port used by Jews to escape Hitler). Then dad spent his last days as a teen in Allied Bomber Command, navigating airplanes stationed in modern-day Myanmar. Surviving thanks to the Americans, dad was in Keynes' last class, where he was taught that only a handful of system designers control what futures are possible. EconomistScotland.com AbedMooc.com

To help mediate such, questioning every world event with optimistic rationalism, my father's 2000 articles at The Economist interpret all sorts of future spins. After his 15th year he was permitted one signed survey a year. In the mid 1950s he met John von Neumann, whose biographer he became, and he was the only journalist at Messina's birth of the EU. If you only have time for one download, this one-page tour of COLLABorations composed by Fazle Abed and networked by a billion poorest village women offers clues to sustainability from the ground up like no white ruler has ever felt or morally audited. It begins with London Scot James Wilson. Could Queen Victoria change empire from slavemaking to commonwealth? Some say Victoria liked the challenge James set her; others that she gave him a poison-pill assignment. Thus James arrived in Calcutta in 1860 with the Queen's permission to charter a bank by and for Indian people. Within 9 months he died of diarrhea. 75 years later, Calcutta was where the young Fazle Abed grew up - his family accounted for some of the biggest traders - only to be partitioned back at age 11 to his family's home region in the far north east of what had been British Raj India but was now to be ruled by Pakistan for 25 years. At 18, Abed made the trek to Glasgow University to study naval engineering.

new york

1943 marked the centenary autobiography of The Economist, with my teenage dad Norman prepping to be a navigator for Allied Bomber Command in the Burma Campaign. Thanks to the US, dad survived and finished in the last class of Keynes before starting 5 decades at The Economist; after 15 years he was allowed to sign one survey a year, starting in 1962 with the scoop that Japan (South Korea, Taiwan, soon HK and Singapore) had found development models for all Asia to rise. Rural Keynes could end village poverty & starvation; supercity win-win trades could celebrate Neumann's gift of 100 times more tech per decade (see the Macrae bio of von Neumann).

Since 1960 the legacy of von Neumann means every decade multiplies micro-technology 100-fold - an unprecedented time, for better or worse, for all earthdwellers; 2025 Report timelined and mapped innovation exponentials - education, health, go green etc - (opportunities, threats) to celebrating a sustainability generation by 2025. Dad parted from earth in 2010; since then, 2 journals by Adam Smith scholars out of Glasgow, where engines began in 1760 - Social Business; New Economics - have invited academic worlds and young graduates to question where the human race is going. After 30 business trips to wealthier parts of Asia, through the 2010s I have mainly sherpa'd young journalists to Bangladesh - we are filing 50 years of cases on women's empowerment at these web sites: AbedMOOC.com FazleAbed.com EconomistPoor.com EconomistUN.com WorldRecordjobs.com Economistwomen.com Economistyouth.com EconomistDiary.com UNsummitfuture.com. In my view, how a billion Asian women linked together to end extreme poverty across continental Asia is the greatest and happiest miracle anyone can take notes on. Please note: the rest of this column does not reflect my current maps of how or where the younger half of the world need to link in to be the first SDG generation... it's more like an old scrapbook.

How do humans design futures? In the 2020s, decade of the SDGs, this question has never had more urgency. To be or not to be - ref the lessons of Deming or Keynes, or Glasgow University alumnus Smith and 200 years of hi-trust economics mapmaking, later Fazle Abed. We now know how: a man-made system is defined by one goal uniting generations - a system multiplies connected peoples' work and demands either accelerating progress to its goal or collapsing.

Sir Fazle Abed died in Dec 2019 - so who are his most active scholars of climate adaptability, where COP26 in November will be a great chance to reunite with 260 years of Adam Smith's and James Watt's purposes to end poverty? Specifically, we interpret SDG 1 as meaning the next girl or boy born has a fair chance at a free, happy and productive life, as we seek to make any community a child is born into a thriving space to grow up in. Between the discovery of new worlds in 1500 and 1945, systems got worse and worse on that goal - processes like slavery emerged - and ultimately the world was designed around a handful of big empires, and often only the most powerful men in those empires.
4 amazing human-tech systems were invented and in massive use by 1960. Borlaug agriculture and related solutions every poorest village (2/3 of people still had no access to electricity) could action-learn person to person. Deming engineering, whose goal was zero defects by helping workers humanize machines - this even allowed thousands of small suppliers each to be best at one part in machines assembled from all those parts. Although Americans invented these solutions, Asia most needed them and joyfully became world class at them - up to 2 billion people were helped to end poverty through sharing this knowhow. Unlike consuming up things, actionable knowhow multiplies value in use when it links through every community that needs it.

The other two technologies - space, media and satellite telecoms, and digital analytic power - looked promising. By 1965, alumni of Moore promised to multiply the efficiency of this core tech 100-fold each decade to 2030 - that would be a trillion times more than was needed to land on the moon in the 1960s. You might think this tech could improve the race to end poverty - and initially it did, but by 1990 it was designed around the long-term goal of making 10 men richer than the 40% poorest. These men also got involved in complex vested interests, so that the vast majority of politicians in Brussels and DC backed the big getting bigger - often using fake media to hide what they were doing to climate and other stuff a world trebling in population size depends on. We the 3 generations - children, parents, grandparents - have until 2030 to design new system orbits gravitated around Goal 1 while navigating the UN's other 16 goals. Do you want to help? 8 cities where we spend most time helping students exchange sustainability solutions 2018-2019: BR0 Beijing Hangzhou:
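The "100-fold each decade" compounding claim above can be checked with one line of arithmetic, taking the text's assumption at face value:

```python
# Compound the text's assumed 100x-per-decade efficiency gain
# from 1965 to 2030. Dates and rate come from the paragraph above.
decades = (2030 - 1965) / 10   # 6.5 decades
growth = 100 ** decades        # total multiplier over the period
print(f"{growth:.1e}")         # 1.0e+13 -- about ten trillion-fold
```

So sustained 100x-per-decade growth yields a multiplier on the order of 10^13, which is the scale the "trillion times more" remark is gesturing at.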

Girls' world maps begin at B01 good news reporting with fazleabed.com, valuetrue.com and womenuni.com


online library of norman macrae--


MA1 AliBaba TaoBao

MA2 Ali Financial

MA10.1 DT and ODPS

health catalogue; energy catalogue

Keynes: 2025now - jobs Creating Gen


how poorest women in world build

A01 BRAC health system,

A02 BRAC education system,

A03 BRAC banking system

K01 Twin Health System - Haiti& Boston

Past events EconomistDiary.com

include the 15th annual spring collaboration cafe, New York - 2022 was with sister city Hong Kong designers of metaverse for beeings.app

© 2024   Created by chris macrae.