260SmithWatt 70Neumann 50F.Abed, AI20s.com Fei-Fei Li, Zbee

HumansAI.com NormanMacrae.net AIGames.solar EconomistDiary.com Abedmooc.com

AI: In gamifying verygoodai, players spanning 73 years of the Economist-Neumann diaries see humanity1.0 ending with whether the UN accomplishes the SDGs promised in 2015 for 2030. We'd accept, as millennials' hope, a bridge to renewable humanity2.0: intelligence shared wherever families live, so goals compound in the right direction - unlike the UN's 2023 halfway report, where all were spiraling backwards from 2015. We know that enough tech is available, but the courage humans would communally need is in most cases not being empowered by those choosing what to fund.

To counter the above mess, each new learning year 7654321 aims to feature a world series of good intelligence. Here's a corker to start 23-24, with some additional commentary at LinkedIn; please tell us if you see more worthy of these series or our daily diary UNsummitfuture.com

Uniting worlds I see - Intelligence Countdown 7

Transcript of AI goddad & goddaughter, Canada, Fall 2023 - first 38 minutes

0:52
Welcome, you all, to this discussion between Geoffrey Hinton, University Professor Emeritus at the University of Toronto, known to many as the Godfather of deep learning, and Fei-Fei Li <a href="https://money.cnn.com/2016/07/21/news/economy/chinese-immigrant-stanford-professor/">1</a>, the inaugural Sequoia Professor in Computer Science at Stanford University, where she is co-director of the Human-Centered AI Institute.

Thanks to Radical Ventures and the other event partners for joining with U of T.
1:40 Professor Hinton and his students pioneered U of T's tradition of excellence; graduates, together with partners at the Vector Institute and at universities around the world, are advancing machine learning. Later this fall, faculty, students and partners will begin moving into phase one of the beautiful new Schwartz Reisman Innovation Campus just across the street (you may have noticed a rather striking building), creating Canada's largest university-based innovation hub, made possible by a generous and visionary gift from Heather Reisman and Gerry Schwartz. The Innovation Campus will be a focal point for AI thought leadership, hosting both the Schwartz Reisman Institute for Technology and Society, led by Professor Gillian Hadfield, and the Vector Institute. It's already clear that artificial intelligence and machine learning are driving innovation and value creation across the economy; they are also transforming research in fields like drug discovery, medical diagnostics and the search for advanced materials. Of course, at the same time there are growing concerns over the role that AI will play in shaping humanity's future, which today's conversation clearly addresses. Without further ado, let me now introduce today's moderator, Jordan Jacobs.
3:30
Jordan is managing partner and co-founder of Radical Ventures, a leading venture capital firm supporting AI-based ventures here in Toronto and around the world. Earlier he co-founded Layer 6 AI and served as co-CEO prior to its acquisition by TD Bank Group, which he joined as Chief AI Officer. Jordan serves as a director of the Canadian Institute for Advanced Research, and he was among the founders of the Vector Institute, a concept that he dreamed up with Tomi Poutanen, Geoff Hinton, Ed Clark and a few others.

4:14
Jacobs: Come on up. Uh, thanks very much, Meric. This is the first in our annual four-part series of AI founder masterclasses that we run at Radical; this is the third year we've done it, and today's is the first one of this year. We do it in person and online, so we've got thousands of people watching this online.
4:55
Radical Ventures works in partnership with the Vector Institute, the Alberta Machine Intelligence Institute (Amii), and Stanford. Jeff is often called the Godfather of artificial intelligence. He's won the Turing Award; he is University Professor Emeritus at the University of Toronto and co-founder of the Vector Institute; and he has mentored a lot of the people who have gone on to be leaders in AI globally, including at the big companies and at many of the top research labs in the world in academia.


Fei-Fei is the founding director of the Stanford HAI Institute for Human-Centered AI, and a member of the National Academy of Engineering in the US, the National Academy of Medicine, and the American Academy of Arts and Sciences. During 2017-18 she took up a role as a Vice President at Google, as Chief Scientist of AI/ML at Google Cloud. There are many, many other things we could say about Fei-Fei, but she also has an amazing number of students who have gone on to be leaders in the field globally, and, really importantly, she has a book coming out in a couple of weeks. It's called The Worlds I See: Curiosity, Exploration and Discovery at the Dawn of AI. Jeff wrote on the back cover of the book: "Fei-Fei Li was the first computer vision researcher to truly understand the power of big data, and her work opened the floodgates for deep learning. She delivers an urgent, clear-eyed account of the awesome potential and danger of the AI technology that she helped to unleash, and her call for action and collective responsibility is desperately needed at this pivotal moment in history."

7:58 Interviewer: Fei-Fei and Jeff, can we go back to the Big Bang moment: AlexNet winning the ImageNet competition in 2012? Maybe, Jeff, do you want to take us through that moment from your perspective - which is 11 years ago now.

Jeff: Okay, so in 2012 two of my very smart graduate students won a public competition and showed that deep neural networks could do much better than the existing technology. Now, this wouldn't have been possible without a big dataset that you could train them on; up to that point there hadn't been a big dataset of labeled images, and Fei-Fei was responsible for that dataset. So I'd like to start by asking Fei-Fei whether there were any problems in putting together that dataset.

Fei-Fei: Thank you, Jeff and Jordan; it's really fun to be here. So yes, the dataset that you're mentioning, Jeff, is called ImageNet. I began building it in 2007 and spent the next three years pretty much with my graduate students building it. And you asked me: was there a problem building it?
9:07
Where do I even begin? Even at the conception of this project I was told that it really was a bad idea. I was a young assistant professor - my first year as assistant professor at Princeton - and, for example, a very, very respected mentor of mine in the field (if you know the academic jargon, these are the people who will be writing my tenure evaluations) actually told me, really out of their good heart: please don't do this. You might have trouble getting tenure if you do this. (Jeff: You're talking about Jitendra Malik. Fei-Fei: Mmm.)

Then I also tried to invite other collaborators, and nobody in machine learning or AI wanted to even go close to this project. And of course, no funding.

Interviewer: OK, could you just describe ImageNet for people who are not familiar with what it was?

Fei-Fei Li: So ImageNet was conceived around 2007, and the reason I conceived it was actually twofold.

One, I was trained as a scientist; to me, doing science is chasing after North Stars, and in the field of AI, especially visual intelligence, object recognition - the ability for computers to recognize that there's a table in the picture, or there's a chair - has to be a North Star problem in our field, and I felt that we needed to really put a dent in this problem. So I wanted to define that North Star problem; that was one aspect of ImageNet.

Second, ImageNet was recognizing that machine learning was going in circles a little bit at that time: we were making really intricate models without the kind of data to drive the machine learning. Of course, in our jargon it's really the generalization problem, and I recognized that we really needed to hit reset and rethink machine learning from a data-driven point of view. So I wanted to go crazy and make a dataset that no one had ever seen in terms of its quantity and diversity and everything. So ImageNet, after 3 years, was a curated dataset of internet images totaling 15 million images across 22,000 object concepts, and that was the dataset.

Jeff: Just for comparison, at the same time in Toronto we were making a dataset called CIFAR-10 that had 10 different classes and 60,000 images, and it was a lot of work - generously paid for by CIFAR at 5 cents an image.

Interviewer: So you turned the dataset into a competition. Just walk us through what that meant, and then we'll kind of fast-forward to 2012.

Fei-Fei: So we made the dataset in 2009; we barely made it into a poster at an academic conference, and no one paid attention. So I was a little desperate at that time, but I believed this was the way to go, and we open-sourced it. But even with open source it wasn't really picking up, so my students and I thought: well, let's get a little more drive out of competition. Let's create a competition to invite the worldwide research community to participate in this problem of object recognition through images.
13:27
So we made an ImageNet competition, and the first feedback we got from our friends and colleagues was: it's too big - e.g. you cannot fit it onto a hard drive, let alone into memory. So we actually created a smaller dataset, called the challenge dataset, which is only 1 million images across 1,000 categories instead of 22,000, and that was unleashed in 2010. I think you guys noticed it in 2011.

2012: Alex Krizhevsky with Ilya Sutskever.
Jeff: Right, yes. So in my lab we already had deep neural networks working quite well for speech recognition, and Ilya said we really ought to be able to win the ImageNet competition, and he tried to convince me that we should do that, and I said: well, you know, it's an awful lot of data. And he also tried to convince his friend Alex Krizhevsky, and Alex wasn't really interested.

So Ilya actually pre-processed all the data to put it in just the form Alex needed - we shrunk the size of the ImageNet images - and then Alex eventually agreed to do it. Meanwhile, in Yann LeCun's lab in New York, Yann was desperately trying to get his students and postdocs to work on this dataset, because he said the first person to apply convolutional nets to this set was going to win - and none of his students were interested; they were all busy doing other things. And so Alex and Ilya got on with it.
15:10
And we discovered, by running on the previous year's 2011 competition, that we were doing much better than the other techniques, and so we knew we were going to win the 2012 competition. And then there was this political problem, which is: we thought if we showed a neural network winning this competition, the computer vision people in particular would say, well, that just shows it's not a very good dataset.
15:40
So we had to get them to agree ahead of time that if we won the competition, we'd have proved that neural networks worked. So I actually called up Jitendra, and we talked about datasets we might run on, and my objective was to get Jitendra to agree that if we could do ImageNet, then neural nets really worked. After some discussion, and him telling me to do other datasets, we eventually agreed: okay, if we could do ImageNet, we'd have shown neural nets work. Jitendra remembers it as: he suggested ImageNet and he was the one who told us to do it; but it was actually a bit the other way around. And we did it, and it was amazing: we got just over half the error rate of the standard techniques - and the standard techniques had been tuned for many years by very good researchers.

Fei-Fei: I remember getting a phone call late one evening from my student who was running the prep. It was the beginning of October that year; the computer vision field's international conference, ICCV 2012, was happening in Florence, Italy. We had already booked our annual workshop at the conference, where we would be announcing the winner; it was the third year. A couple of weeks before, we had to process the results, and frankly the previous two years' results didn't excite me, and I was a nursing mother at that time, so I had decided not to go the third year and didn't book any tickets - just too far for me. And then the results came in; that evening the phone call came, and I remember saying to myself: darn it, Jeff, now I have to get a ticket to Italy - because I knew that was a very significant moment, especially as it was a convolutional neural network, which I had learned about as a graduate student as a classic. As a machine learning researcher, I knew history was in the making; yet ImageNet was being attacked. It was just a very strange, exciting moment.
Interviewer: Let's just go back for a little bit. Both of you have had to persevere through the moments that you just described, and kind of throughout your careers. Can you just go back - Jeff, maybe, to start - and give us a background to why you wanted to get into AI in the first place?

21:18
Jeff: I did psychology as an undergraduate. I didn't do very well at it, and I decided they were never going to figure out how the mind worked unless they figured out how the brain worked. So I wanted to figure out how the brain worked, and I wanted to have an actual model that worked. You can think of understanding the brain as building a bridge: there's experimental data and things you can learn from experimental data, and there's things that will do the computations you want - things that will recognize objects - and they were very different.

21:55
I think of it as wanting to build this bridge between the data and the competence - the ability to do the task. I always saw myself as starting at the end of things that work, trying to make them more and more like the brain while still working. Other people tried to stay with things justified by empirical data and have theories that might work; but we were trying to build that bridge, and not many people were trying to build the bridge. Terry Sejnowski was trying to build the bridge from the other end, and so we got along very well. A lot of people trying to do computer vision just wanted something that worked; they didn't care about the brain. And a lot of people who cared about the brain wanted to understand how neurons work and so on, but didn't want to think much about the nature of the computations. I still see it as: we have to build this bridge by getting people who know about the data and people who know about what works to connect.

So my aim was always to make things that could do vision, but do vision the way that people do it.

Interviewer: Okay, so we're going to come back to that, because I want to ask you about the most recent developments and how you think they relate to the brain. So, Jeff, just to put a framework on where you started: UK to the US to Canada by the mid-to-late '80s - you came to Canada in '87 - and along that route, funding for and interest in neural nets and the approaches you were taking kind of went up and down, but I would say mostly...

23:37
Fei-Fei, you started your life in a very different place; can you walk us through a little bit of how you came to AI?
23:44
Fei-Fei: So I started my life in China (where my parents were chemistry and physics teachers); then, when I was 15 years old, my parents and I came to Parsippany, New Jersey. So I became a new immigrant, and where I started was first English, because I didn't speak the language, and just working in laundries and restaurants and so on. But at high school I had a passion for physics. I don't know how it got into my head, but I wanted to go to Princeton, because all I knew was that Einstein was there.

Well, he wasn't there by the time I got in! I'm not that old. But there was a statue of him, and the one thing I learned in physics, beyond all the math and all that, is really the audacity to ask the craziest questions - like the smallest particles of the world, or the boundary of space-time, or the beginning of the universe. And along the way I discovered the brain, in a third-year course, through Roger Penrose's books.
25:07
Jeff: Probably better you didn't take him too seriously.
25:20
Fei-Fei: Yeah, but I got excited by the brain, and by the time I was graduating I wanted to ask the most audacious question as a scientist - and to me the absolute most fascinating, audacious question of my generation was intelligence. So I went to Caltech to get a dual PhD, in neuroscience with Christof Koch and in AI with Pietro Perona. So I echo, Jeff, what you said about the bridge, because those five years allowed me to work on computational neuroscience and look at how the mind works, as well as to work on the computational side and try to build computer programs that could mimic the human brain. So that's my journey; it starts from physics.

Interviewer: Okay, so your journeys intersect at ImageNet 2012. By the way, I met Jeff when I was a graduate student. - Right, I remember; I used to go visit Pietro's lab. - Yeah, in fact he actually offered me a job at Caltech when I was 70. - You would have been my adviser! - No, I would not; not when I was 70. Okay, so we intersected at ImageNet.

For those in the field, everyone knows that ImageNet is this Big Bang moment, and subsequent to that, first the big tech companies come in and basically start buying up your students and getting them into the companies - I think they were the first ones to realize the potential of this. I'd like to talk about that in a moment, but fast-forwarding: I think it's only now, since ChatGPT, that the rest of the world is catching up to the power of AI, because finally you can play with it. You can experience it. You know, in the boardroom they can talk about it, and then go home, and their 10-year-old kid has just written a dinosaur essay for fifth grade with ChatGPT.
27:22
So that kind of transcendent experience of everyone being able to play with it has, I think, been a huge shift. But in the period in between - which is 10 years - there was this kind of explosive growth of AI inside the big tech companies, while everyone else was not really noticing what was going on. Can you just talk us through your own experience, because you experienced it at ground zero?
27:48
Jeff: It's difficult for us to get into the frame of everybody else not realizing what was going on, because we realized what was going on. A lot of the universities you'd have thought would be right at the forefront were very slow in picking up on it - MIT, for example, and Berkeley. I remember going to give a talk at Berkeley, I think in 2013, when AI was already being very successful in computer vision, and afterwards a graduate student came up to me and said: I've been here like four years, and this is the first talk I've heard about neural networks; they're really interesting.
28:29
Yes - while Stanford got excited about neural nets, MIT did not; they were rigidly against having neural nets, and the ImageNet moment started to wear them down, and now they're big proponents of neural nets. It's hard to imagine now, but around 2010 or 2011 the computer vision people - very good computer vision people - were really adamantly against neural nets. They were so against them that, for example, one of the main journals had a policy at one point not to referee papers on neural nets; just send them back.
29:13
Don't referee them, it's a waste of time. E.g. Yann sent a paper to a conference where he had a neural net that was better at doing segmentation of pedestrians than the state of the art, and it was rejected. One of the reasons it was rejected was that one of the referees said: this tells us nothing about vision. Because they had this view of how computer vision works, which is: you study the nature of the problem of vision, you formulate an algorithm that'll solve it, you figure out how to implement that algorithm, and then you publish a paper - and the fact that it doesn't work... (Fei-Fei: I have to defend my field - not everybody!) Not everybody; there are people who weren't, but most of them were adamantly against neural nets. And then something remarkable happened after the ImageNet competition, which is: they all changed within about a year. All the people who had been the biggest critics of neural nets started doing neural nets, much to our chagrin, and some of them did them better than us - Zisserman in Oxford, for example, made a better neural net very quickly. But they behaved like scientists ought to behave: they had this strong belief that this stuff was rubbish; because of ImageNet we could eventually show that it wasn't, and then they changed. So that was very comforting.

30:49
Interviewer: And just to carry it forward: what you're trying to do is label, using the neural nets, these 15 million images accurately; you've got them all labeled in the background so you can measure it. The error rate, when you did it, dropped from 26% the year before, I think, to 15.3%, and then it subsequently keeps improving.
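For readers outside the field: the headline ImageNet challenge numbers quoted here are top-5 error rates - the model proposes five labels per image and counts as correct if the true label is among them (that metric detail comes from the challenge rules, not from the talk). A minimal numpy sketch of how such an error rate is computed:

```python
import numpy as np

def top5_error(scores, labels):
    """scores: (n_images, n_classes) model scores; labels: (n_images,) true class ids.
    An image counts as correct if its true label is among the 5 highest-scoring classes."""
    top5 = np.argsort(scores, axis=1)[:, -5:]        # indices of the 5 best-scoring classes per image
    hits = (top5 == labels[:, None]).any(axis=1)     # is the true label anywhere in the top 5?
    return 1.0 - hits.mean()

# toy check: random scores over 1,000 classes give ~99.5% top-5 error (chance = 5/1000)
rng = np.random.default_rng(0)
scores = rng.normal(size=(1000, 1000))
labels = rng.integers(0, 1000, size=1000)
print(f"top-5 error on random scores: {top5_error(scores, labels):.3f}")
```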

31:03
Fei-Fei: And in subsequent years people used more powerful neural nets, and the error rate continued to drop, to the point where it surpassed human performance in 2015. So there was a very smart Canadian undergrad who joined my lab - his name is Andrej - and he got bored one summer and said: I want to measure how well humans do. You should go read his blog: he hosted human ImageNet parties, where he had to bribe participants with pizza.
31:39
I think, with my students in the lab, they got to an error rate of about 3.5% - so humans basically make mistakes about 3% of the time.

31:57
And then, I think in 2016, a ResNet passed it; ResNet was that year's winning algorithm, and it passed human performance. And then ultimately you had to retire the competition because it was so much better than humans - and we had to retire it because we ran out of funding.
32:15
Interviewer: Incidentally, that student started life at the University of Toronto before he went to your lab, and then he went on to head AI at Tesla. Fei-Fei: Okay - first of all, he came to Stanford to be a PhD student, and last night we were talking, actually; there was a breakthrough dissertation, you know, in the middle of all this. Then he became part of the founding team of OpenAI, but then he went to Tesla, and then he thought better of it and went back. But I do want to answer your question about those 10 years as well.
32:56
There were a couple of developments along the way - Transformers, right? So the Transformer paper: the research was done, and the paper written, inside Google.

How does the transformer approach innovate on what has been done with AI?
BARD SAYS:
The transformer approach innovates on what has been done with AI in a number of ways. First, it is able to process long sequences of data efficiently and effectively. This is because transformers use a self-attention mechanism, which allows them to learn long-range dependencies in data.

Second, transformers are able to learn complex relationships between different parts of a sequence. This is because transformers use a parallel architecture, which allows them to process all parts of a sequence simultaneously.

Third, transformers are able to generalize to new data well. This is because transformers are trained on large datasets of unlabeled data.

These innovations have allowed transformers to achieve state-of-the-art results on a wide range of natural language processing tasks, including machine translation, text summarization, and question answering.

Transformers have also been used to develop new AI applications, such as large language models like me. Large language models are able to generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way.

Overall, the transformer approach has revolutionized the field of AI. It has enabled the development of new algorithms and applications that are more powerful and versatile than anything that came before.
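To make the self-attention mechanism described above concrete, here is a minimal numpy sketch of scaled dot-product self-attention (our own illustration, not from the talk; real transformers add multiple heads, residual connections, feed-forward layers and positional information on top):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))   # subtract max for numerical stability
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a token sequence X of shape (seq_len, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                  # project each token to query, key, value
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # every token scores every other token at once
    weights = softmax(scores, axis=-1)                # attention weights; each row sums to 1
    return weights @ V                                # each output is a weighted mix of all values

# toy example: 5 tokens, model width 8, attention width 4
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)            # -> (5, 4)
```

Because the score matrix covers all token pairs in one matrix product, the whole sequence is processed in parallel - which is where the long-range-dependency and parallelism claims above come from.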

Here are some specific examples of how the transformer approach has innovated on what has been done with AI:

Machine translation: Transformers have achieved state-of-the-art results on a wide range of machine translation tasks. For example, the Google Translate system now uses transformers to translate between over 100 languages.
Text summarization: Transformers have also achieved state-of-the-art results on text summarization tasks. For example, the BART model is able to generate summaries of text that are more accurate and informative than summaries generated by previous models.
Question answering: Transformers have also achieved state-of-the-art results on question answering tasks. For example, the T5 model is able to answer questions about text in a comprehensive and informative way.
Large language models: Transformers have also been used to develop large language models, such as me. Large language models are able to generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way.
The transformer approach has also been used to develop new AI applications in other domains, such as computer vision.

The transformer breakthrough was published by a group of eight researchers at Google in 2017. The paper, titled "Attention Is All You Need", was presented at the 2017 NeurIPS (then NIPS) conference.

The authors of the paper are:

Ashish Vaswani
Noam Shazeer
Niki Parmar
Jakob Uszkoreit
Llion Jones
Aidan N. Gomez
Łukasz Kaiser
Illia Polosukhin
This paper is considered to be one of the most important papers in the field of AI in recent years. It has led to the development of new algorithms and applications that are more powerful and versatile than anything that came before.

The transformer architecture has been used to achieve state-of-the-art results on a wide range of natural language processing tasks, including machine translation, text summarization, and question answering. It has also been used to develop new AI applications, such as large language models like me.

The transformer breakthrough is a testament to the power of collaboration and open research. The authors of the paper were able to achieve this breakthrough by building on the work of many other researchers, and by sharing their work with the world.
Interviewer: Another Canadian is a co-author there - Aidan Gomez, who's now the CEO and co-founder of Cohere, and who I think was a 20-year-old intern at Google Brain when he co-authored the paper. So there's a tradition of Canadians being involved in these breakthroughs. But Jeff, you were at Google when the paper was written; was there an awareness inside Google of how important this would be?
33:33
Jeff: I don't think there was. Maybe the authors knew, but it took me several years to realize how important it was - and at Google, people didn't realize how important it was until BERT. BERT used Transformers, and BERT then became a lot better on a lot of natural language processing benchmarks, for a lot of different tasks; that's when people realized Transformers were special.
BARD SAYS:
BERT, or Bidirectional Encoder Representations from Transformers, is a language model developed by Google AI. It is one of the first language models to use the transformer architecture, and it has achieved state-of-the-art results on a wide range of natural language processing tasks.

BERT is a bidirectional language model, which means that it can learn the context of words in a sentence from both the left and the right. This is in contrast to previous language models, which were only able to learn the context of words from the left.

BERT is trained on a massive dataset of text, which allows it to learn the relationships between words and phrases. Once BERT is trained, it can be fine-tuned for specific natural language processing tasks, such as machine translation, text summarization, and question answering.

BERT is a powerful tool for natural language processing, and it has been used to develop a wide range of new applications. For example, BERT is used in the Google Search engine to improve the quality of search results. It is also used in the Google Assistant to improve the accuracy of natural language processing tasks, such as speech recognition and text generation.

Here are some specific examples of how BERT has been used to achieve state-of-the-art results on natural language processing tasks:

Machine translation: BERT has been used to develop machine translation systems that are more accurate and fluent than previous systems.
Text summarization: BERT has been used to develop text summarization systems that are able to generate more informative and concise summaries of text.
Question answering: BERT has been used to develop question answering systems that are able to answer questions about text in a more comprehensive and informative way.
BERT is a great example of how the transformer architecture can be used to improve the performance of natural language processing systems.
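As a concrete illustration of the bidirectional fill-in-the-blank objective described above, here is a short sketch using the Hugging Face transformers library's fill-mask pipeline (our example, assuming the library and the bert-base-uncased checkpoint are available; the sentence is invented):

```python
# pip install transformers torch
from transformers import pipeline

# fill-mask asks BERT to predict a hidden word using context from BOTH sides of the blank
unmasker = pipeline("fill-mask", model="bert-base-uncased")

for candidate in unmasker("ImageNet is a large [MASK] of labeled images."):
    # each candidate carries the predicted token and a confidence score
    print(f"{candidate['token_str']:>12}  {candidate['score']:.3f}")
```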
2017. Fei-Fei: So the Transformer paper was published; I also joined Google, and I think you and I actually met in my first week. I think most of 2017 and 2018 at Google was neural architecture search, right? I think that was Google's bet, and there were a lot of GPUs being used - so it was a different bet. Jeff: Just to explain, neural architecture search essentially means this: you get yourself a whole lot of GPUs and you just try out lots of different architectures to see which works best, and you automate that. It's basically automated evolution for neural net architectures - like hyperparameter search - and it led to some quite big improvements, but nothing like Transformers; Transformers were a huge improvement for natural language. Fei-Fei: And neural architecture search was mostly done on images.
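A minimal sketch of what "try out lots of architectures and automate it" can mean in its simplest form: plain random search over a toy search space. Everything here is hypothetical (the search space, the evaluate stub); in real neural architecture search, evaluate would train each candidate network on many GPUs and return its validation accuracy:

```python
import random

# toy search space: an architecture is a choice of depth, width and kernel size
SEARCH_SPACE = {
    "depth":  [2, 4, 8, 16],
    "width":  [32, 64, 128, 256],
    "kernel": [1, 3, 5, 7],
}

def sample_architecture(rng):
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def evaluate(arch):
    # stand-in for the expensive step (train on GPUs, measure validation accuracy);
    # here just a made-up deterministic score so the sketch runs instantly
    return arch["width"] / (256 * (1 + abs(arch["depth"] - 8) + abs(arch["kernel"] - 3)))

def random_search(trials=100, seed=0):
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(trials):
        arch = sample_architecture(rng)
        score = evaluate(arch)              # in real NAS this step dominates the cost
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

print(random_search())                      # best toy architecture found, and its score
```

The NAS systems of that era replaced the random sampler with evolutionary or reinforcement-learning controllers, but the automated sample-evaluate loop is the same idea.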

Fei-Fei Li: I do think there's something important that the world overlooked in these 10 years between ImageNet/AlexNet and ChatGPT. Most of the world sees this as a tech 10 years - or we see it as a tech 10 years: in big tech there were things brewing; I mean, it took sequence-to-sequence, Transformers, but things were brewing. But I do think, for me personally and for the world, it's also a transformation from tech to society. I actually think, personally, I grew from a scientist to a humanist in these 10 years,
because, having joined Google for those two years in the middle of the Transformer paper, I began to see the societal implications of this technology. It was post the AlphaGo moment, and very quickly we got to the AlphaFold moment. Bias was creeping out; there were privacy issues; then we were starting to see the beginning of disinformation and misinformation; and then we were starting to see the talk of job displacement - within a small circle, not yet in a big public discourse. It was when I grew personally anxious. It was also right after Cambridge Analytica, so that huge

38:16
implication of technology - not AI per se, but algorithm-driven technology - on elections. That's when I had to make a personal decision: stay at Google, or come back to Stanford. And I knew the only reason I would come back to Stanford was to start this Human-Centered AI Institute, to really, really understand the human side of this technology. So I think this was a very important 10 years; even though it was kind of not in the eyes of the public, this technology was starting to really creep into the rest of our lives. And of course, in 2022, it was all shown in the daylight - how profound this is.


ENTREPRENEURIAL REVOLUTION NETWORK BENCHMARKS 2025now : Remembering Norman Macrae

unaiwho.docx version 6/6/22 - hunt for 100 helping Guterres most with UN2.0

EconomistDiary.com Friends20.com & EntrepreneurialRevolution.city select 2022's greatest moments for citizens/youth of NY & HK & Utellus

Prep for UN Sept 22 summit: education no longer fit for human beings/sustainability

JOIN SEARCH FOR UNDER 30s MOST MASSIVE COLLABS FOR HUMAN SUSTAINABILITY - 3/21/22 HAPPY 50th Birthday TO WORLD'S MOST SUSTAINABLE ECONOMY- ASIAN WOMEN SUPERVILLAGE

Since gaining my MA in statistics at Cambridge DAMTP in 1973 (Corpus Christi College), my special subject has been community building networks. These are the 6 most exciting collaboration opportunities my life has been privileged to map - the first two evolved as grassroots person-to-person networks before 1996 in tropical Asian places where village women had no access to electricity grids nor phones; then came mobile and solar entrepreneurial revolutions!!

COLLAB platforms of livesmatter communities to mediate public and private:

5 poorest village mothers empowering end of poverty  5.1  5.2  5.3  5.4  5.5  5.6

4 livelihood edu for all  4.1  4.2  4.3  4.4  4.5  4.6

3 last mile health services  3.1  3.2  3.3  3.4  3.5  3.6

2 last mile nutrition  2.1  2.2  2.3  2.4  2.5  2.6

1 banking for all workers  1.1  1.2  1.3  1.4  1.5  1.6


NEWS FROM LIBRARY NORMAN MACRAE - latest publication: 2021 translation into Japanese of biography of von Neumann:

Below: neat German catalogue (about half of dad's signed works) but expensive - interesting to see how Germans selected the parts they like over time: e.g. omitted 1962 Consider Japan, The Economist

feel free to ask if free versions are available 

The coming entrepreneurial revolution: a survey. Macrae, Norman - In: The Economist 261 (1976), pp. 41-65. Cited 105.

Macrae, Norman - In: IPA Review / Institute of Public Affairs 25 (1971) 3, pp. 67-72

Macrae, Norman - The Economist 257 (1975), pp. 1-44

The future of international business. Macrae, Norman - In: Transnational corporations and world order: readings …, (pp. 373-385). 1979

Future U.S. growth and leadership assessed from abroad. Macrae, Norman - In: Prospects for growth: changing expectations for the future, (pp. 127-140). 1977

Entrepreneurial Revolution - next capitalism: in hi-tech, left=right=center. The Economist, 1976

Macrae, Norman - In: European Community (1978), pp. 3-6

Macrae, Norman - In: Kapitalismus heute, (pp. 191-204). 1974

We Scots are less than 4/1000 of the world's people, and 3/4 are diaspora - immigrants in other countries. Since 2008 I have been celebrating Bangladesh women empowerment solutions with NY graduates. Now I want to host "love each other" events in New York, starting this week with Hong Kong; contact me if we can celebrate another country's win-wins with New Yorkers.

mapping OTHER ECONOMIES:

50 SMALLEST ISLAND NATIONS

TWO Macroeconomies FROM SIXTH OF PEOPLE WHO ARE WHITE & war-prone

Democratic

Russian

=============

From 60%+ of people = Asian Supercity (60th year of Economist reporting - see Consider Japan, 1962)

Far South - eg African, Latin Am, Australasia

Earth's other economies: Arctic, Antarctic, Desert, Rainforest

===========

In addition to how the 5 primary SDGs (1-5) are gravitated, we see 6 transformation factors as most critical to sustainability across 2020-2025-2030.

Xfactors to 2030: Xclimate, XAI, Xinfra, Xyouth, Xwomen, Xpoor. chris.macrae@yahoo.co.uk (Scot currently in Washington DC) - in 1984 I co-authored the 2025 Report with dad, Norman.

Asia Rising Surveys

Entrepreneurial Revolution - would the endgame of one 40-year generation of applying Industrial Revolutions 3, 4 lead to sustainability or extinction?

1972's Next 40 Years; 1976's Coming Entrepreneurial Revolution; 1982's We're All Intrapreneurial Now (a 12-week leaders' debate)

The Economist had been founded in 1843, marking one of 6 exponential timeframes, "Future Histories"

IN ASSOCIATION WITH ADAMSMITH.app :

we offer worldwide mapping view points from

1 2 now to 2025-30

and these viewpoints:

40 years ago -early 1980s when we first framed 2025 report;

from 1960s when 100 times more tech per decade was due to compound industrial revolutions 3,4 

1945 birth of UN

1843 when the economist was founded

1760s - Adam Smithian 2 views: the last of the pre-engineering era; the first 16 years of the engineering era, including America's declaration of independence - in essence this meant that, up to 1914, continental scaling of engineering would be separate: new world < old world

Economistwomen.com

IF we 8 billion earthlings of the 2020s are to celebrate collaboration escapes from extinction, the knowhow of the networks of the billion poorest Asian women will be invaluable -

in mathematically connected ways, so will the stories of diaspora Scots and of the greatest mathematicians ever home-schooled - central European Jewish teens, e.g. Neumann, Einstein, who emigrated to the USA in the 2nd quarter of the 20th century; it is on such diversity that entrepreneurial revolution diaries have been shaped.

EconomistPOOR.com: Dad was born in the USSR in 1923 - his dad served in British embassies. Dad's curiosity enjoyed the opposite of a standard examined education. From 11+, Norman observed the results of domination of humans by mad white men - Stalin, from being in the British Embassy in Moscow to 1936; Hitler, in the embassy of the last Adriatic port used by Jews to escape Hitler. Then dad spent his last days as a teen in allied bomber command, navigating airplanes stationed in modern-day Myanmar. Surviving thanks to the Americans, dad was in Keynes' last class, where he was taught that only a handful of system designers control what futures are possible. EconomistScotland.com AbedMooc.com

To help mediate such, questioning every world event with optimistic rationalism, my father's 2000 articles at The Economist interpret all sorts of future spins. After his 15th year he was permitted one signed survey a year. In the mid 1950s he had met John von Neumann, whose biographer he became, and he was the only journalist at Messina's birth of the EU. == If you only have time for one download, this one-page tour of collaborations composed by Fazle Abed and networked by a billion poorest village women offers clues to sustainability from the ground up like no white ruler has ever felt or morally audited. ... by London Scot James Wilson. Could Queen Victoria change empire from slavemaking to commonwealth? Some say Victoria liked the challenge James set her; others, that she gave him a poison pill assignment. Thus James arrived in Calcutta in 1860 with the Queen's permission to charter a bank by and for Indian people. Within 9 months he died of diarrhea. 75 years later, Calcutta was where the young Fazle Abed grew up - his family accounted for some of the biggest traders. Only to be partitioned back at age 11 to his family's home region in the far north east of what had been British Raj India but was now to be ruled by Pakistan for 25 years. At age 18, Abed made the trek to Glasgow University to study naval engineering.

New York

1943 marked the centenary autobiography of The Economist, with my teenage dad Norman prepping to be a navigator in allied bomber command's Burma Campaign. Thanks to the US, dad survived, and finished in the last class of Keynes before starting 5 decades at The Economist; after 15 years he was allowed to sign one survey a year, starting in 1962 with the scoop that Japan (Korea S, Taiwan, soon HK, Singapore) had found development models for all Asians to rise. Rural Keynes could end village poverty & starvation; supercity win-win trades could celebrate Neumann's gift of 100 times more tech per decade (see Macrae's bio of von Neumann).

Since 1960 the legacy of von Neumann means every decade multiplies micro-technology 100 times more - an unprecedented time, for better or worse, for all earthdwellers; 2025 timelined and mapped innovation exponentials - education, health, go green etc. - (opportunities, threats) toward celebrating a sustainability generation by 2025. Dad parted from earth in 2010; since then, 2 journals by Adam Smith scholars out of Glasgow, where engines began in 1760 - Social Business and New Economics - have invited academic worlds and young graduates to question where the human race is going. After 30 business trips to wealthier parts of Asia, through the 2010s I have mainly sherpa'd young journalists to Bangladesh - we are filing 50 years of cases on women empowerment at these web sites: AbedMOOC.com FazleAbed.com EconomistPoor.com EconomistUN.com WorldRecordjobs.com Economistwomen.com Economistyouth.com EconomistDiary.com UNsummitfuture.com. In my view, how a billion Asian women linked together to end extreme poverty across continental Asia is the greatest and happiest miracle anyone can take notes on. Please note: the rest of this column does not reflect my current maps of how or where the younger half of the world need to link in to be the first SDG generation... it's more like an old scrapbook.

How do humans design futures? In the 2020s, the decade of the SDGs, this question has never had more urgency. To be or not to be - ref. the lessons of Deming or Keynes, or Glasgow University alumnus Smith and 200 years of hi-trust economics mapmaking, later Fazle Abed. We now know how: a man-made system is defined by one goal uniting generations; a system multiplies connected peoples' work and demands either accelerating progress to its goal or collapsing. Sir Fazle Abed died December 2020 - so who are his most active scholars of climate adaptability, where COP26 this November will be a great chance to reunite with 260 years of Adam Smith's and James Watt's purposes to end poverty? Specifically, we interpret SDG 1 as meaning the next girl or boy born has a fair chance at a free, happy and productive life, as we seek to make any community a child is born into a thriving space to grow up in.

Between the discovery of new worlds in 1500 and 1945, systems got worse and worse on that goal - e.g. processes like slavery emerged - and ultimately the world was designed around a handful of big empires, and often only the most powerful men in those empires. 4 amazing human-tech systems were invented to reach massive use by 1960. Borlaug agriculture, and related solutions, every poorest village (2/3 of people still had no access to electricity) could action-learn person to person. Deming engineering, whose goal was zero defects by helping workers humanize machines, even allowed thousands of small suppliers each to be best at one part in machines assembled from all those parts. Although Americans invented these solutions, Asia most needed them and joyfully became world class at them; up to 2 billion people were helped to end poverty through sharing this knowhow. Unlike consuming up things, actionable knowhow multiplies value in use when it links through every community that needs it.

The other two technologies - space, media and satellite telecoms, and digital analytic power - looked promising. By 1965, alumni of Moore promised to multiply the efficiency of these core technologies 100-fold each decade to 2030 - that would be a trillion times more than was needed to land on the moon in the 1960s. You might think this tech could improve the race to end poverty - and initially it did; but by 1990 it was designed around the long-term goal of making 10 men richer than the 40% poorest. These men also got involved in complex vested interests, so that the vast majority of politicians in Brussels and DC backed the big getting bigger; often they used fake media to hide what they were doing to the climate and other stuff, in a world trebling in population size. We - the 3 generations: children, parents, grandparents - have until 2030 to design new system orbits gravitated around Goal 1 and navigating the UN's other 16 goals. Do you want to help? 8 cities where we spent most time helping students exchange sustainability solutions 2018-2019: BR0 Beijing, Hangzhou:

Girls world maps begin at B01 good news reporting with fazleabed.com  valuetrue.com and womenuni.com

==========

Online library of Norman Macrae

==========

MA1 AliBaba TaoBao

Ma 2 Ali Financial

Ma10.1 DT and ODPS

health catalogue; energy catalogue

Keynes: 2025now - jobs Creating Gen


How the poorest women in the world build:

A01 BRAC health system,

A02 BRAC education system,

A03 BRAC banking system

K01 Twin Health System - Haiti & Boston

Past events EconomistDiary.com

include the 15th annual spring collaboration cafe, New York - 2022 was with sister city Hong Kong designers of the metaverse for beeings.app
