
Reflecting on generative AI


My colleague (Salvatore Moccia) at EIT Digital invited me to give the opening lecture in their Generative AI online class. Marco Podien will speak right after I do with some great insights and examples of using ChatGPT.

In my presentation (which is posted to SlideShare), my goal is to inspire the participants to dive right in and start learning, but I also want them to know a little about the past, present, and possible future of AI. Since I am a retired industry executive (Apple, IBM) who worked at an AI startup in the late 1970s after graduating from MIT, and then got a PhD in Computer Science/AI from Yale in the 1980s, I do have a few stories to tell.

The purpose of this blog post is to document a few day-to-day use cases where I found generative AI helpful, share my responses to the students’ questions, and then close with some reflections.

Day-to-day Use Cases
Today, I help the ISSIP.org non-profit, so many of the examples are connected to work activities associated with generating content for the ISSIP website or related presentations.

Case 1: Asked to write a short article about AI upskilling for a newsletter.

Shortly after my co-authored book “Service in the AI Era” came out, ChatGPT was all the rage. Cecilia Lee, who was then ISSIP Editor-in-Chief, asked me to write a short newsletter article about AI upskilling. I recall cutting and pasting her request into ChatGPT and seeing what it generated – a bit bland, but very fast (a nice essay in under a minute). Next, I used ChatGPT to help create some DALL-E prompts and experimented to get some images. It took a bit of iteration – back and forth – with the tools before I was satisfied. Then I wrote my blog post from my own memory, edited the image, and posted it to the ISSIP website. You can read the final article in this newsletter here and read more about the exact process of creation using the AI tools here.

Case 2: Asked to speak about AI advances at a retirement home where the average age of the men was mid-80s.

I jumped at the opportunity to speak to men in their 80s about AI. I was thinking that some of them might have wanted to write a book, or generate the business plan for a startup, or something else – compose an opera – that they had not gotten around to in their busy lives before moving to the retirement community. Generative AI lowers the barrier to getting started so far that it is easy to just describe what you want and see what you get. I had fun listening to their goals, typing in a prompt, and watching their eyes get big when they saw ChatGPT go to work creating a book, opera, or business plan outline right before their eyes. My presentations always include the “dark side” (bad actors using AI) as well as open issues (energy, plagiarism, lawsuits), and describe the ethical use of AI. You can see my presentation to the men’s club at the Terraces of Los Gatos here.

Case 3: Mentoring students to learn to use generative AI

I also jumped at the chance to mentor SJSU MIS (Management Information Systems) Honors students – who combine business and technology understanding – for a project where ISSIP was the client. What the students generated was awesome, and I share some examples of what they created – videos, images, short essays – all packaged in HTML code for an ISSIP webpage posting to explain a historic service innovation, such as the internal combustion engine, social media, or robots. The students even created a playbook to help ISSIP volunteers learn generative AI! Great stuff from students. I also helped mentor students from PSU (Industrial Engineering and Computer Science), CSULB (User Experience Design), U Washington Tacoma (Data Science and Analytics), and other places and majors – and will post what they created when I have a chance as well. Actually, it will all get posted to a new portion of the ISSIP website called the ISSIP Collab – so check back here in the future.

Case 4: Python Programming
When I write code, I use Google’s Bard – and find it is just so much faster at creating Python functions with test examples than I am. It is a great “coding buddy” that I can delegate to, and get back code to use. I just cannot move my fingers typing as fast as Google’s Bard can. Of course, when I ask it to write code that I have written dozens of times, it is easy for me to check that it is correct.
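Below is a minimal sketch of the kind of routine Python function, with test examples, that I might ask a coding assistant to draft and then verify myself. The function name and behavior here are illustrative only, not taken from any particular session with the tools.

import re

def word_frequencies(text: str) -> dict[str, int]:
    """Count how often each word appears in a text, ignoring case and punctuation."""
    words = re.findall(r"[a-z']+", text.lower())
    counts: dict[str, int] = {}
    for word in words:
        counts[word] = counts.get(word, 0) + 1
    return counts

# Test examples that are easy to check by hand, since I have written code like this many times.
assert word_frequencies("AI is impressive, but imperfect. AI will improve.") == {
    "ai": 2, "is": 1, "impressive": 1, "but": 1, "imperfect": 1, "will": 1, "improve": 1}
assert word_frequencies("") == {}
print("All test examples pass.")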

In all these (and many more) use cases, I always document the vendor, tool, and date of usage as part of the ethical usage of AI. I do not hide the fact that I used AI to generate things. I cite the usage. For example, see these two ISSIP Ambassador blog posts, where I helped the ISSIP Ambassadors create an image to accompany their great blog posts on service innovation topics. Example 1 is service innovation and human-centered AI for socio-technical systems design, and Example 2 is service innovation and financial services and fintech. I also use multiple tools – OpenAI ChatGPT, Google Bard, Anthropic Claude 2, and Microsoft Bing AI – to compare the results and find errors, since today’s AI is impressive, but imperfect.
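Here is a minimal sketch, in Python, of the kind of personal usage log this implies – vendor, tool, date, purpose, and errors corrected. The file name and fields are my own illustrative choices, not an ISSIP standard.

import csv
from datetime import date
from pathlib import Path

LOG_FILE = Path("ai_usage_log.csv")  # hypothetical file name for a personal log

def log_ai_usage(vendor: str, tool: str, purpose: str, errors_corrected: str = "") -> None:
    """Append one row recording a generative AI usage: date, vendor, tool, purpose, corrections."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["date", "vendor", "tool", "purpose", "errors_corrected"])
        writer.writerow([date.today().isoformat(), vendor, tool, purpose, errors_corrected])

# Example entries matching the use cases above.
log_ai_usage("OpenAI", "ChatGPT", "Draft newsletter article on AI upskilling")
log_ai_usage("OpenAI", "DALL-E", "Generate image for ISSIP blog post", "Iterated prompts before accepting")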

Q&A&C (Questions, Answers, Comments)

AV
Q: How can we build our own digital twin?

Jim Spohrer (Guest)
A: Follow Kyle Shannon: https://www.linkedin.com/in/kyleshannon/

DJ
Q: Are you using a tool to build your digital twin?

Jim Spohrer (Guest)
A: I am using LLM chat tools to help me research and design my digital twin, including Anthropic’s Claude which allows PDF uploads. Also, I study the many tools recommended by Kyle Shannon: https://www.linkedin.com/in/kyleshannon/

AU
Q: Thanks for the presentation, how should we start to create our own digital twin?

Jim Spohrer (Guest)
A: Who to follow to build your digital twin – Kyle Shannon: https://www.linkedin.com/in/kyleshannon/
I will certainly add more – but in general my answer to all questions is someone to follow, who is deeper than I am in a particular topic area. My short answers are a reflection of what I have learned mostly from others and my own experimentation. I urge everyone to list out a diverse set of people to follow, and ensure some of those you follow share specific tools and prompts to try for your own experimentation, and to build your own set of use cases.

PC
Q: Can you give more references to learn more about Digital Twins?

Jim Spohrer (Guest)
A: Yes, my co-authored “Service in the AI Era” book and my presentations have many references. Just to warm up on the topic (your own digital twin is coming), I recommend: Wakefield J (2022) “Why you may have a thinking digital twin within a decade.” BBC News Online. URL: https://www.bbc.com/news/business-61742884. Quote: “We are living in an age where everything that exists in the real world is being replicated digitally – our cities, our cars, our homes, and even ourselves.”

AV
Q: Why should one build her/his own digital twin? What can one do with it? What are the benefits?

Jim Spohrer (Guest)
A: Great question! My guess below… Before one can be a “responsible actor” one must become an “aware actor.” The first reason to work on a digital twin of yourself is that large companies are already working on it: Amazon to predict what you buy; LinkedIn (Microsoft) to predict what job you might be best suited for; Facebook and all social media platforms to predict what information you want in your timeline as you scroll. In fact, I predict a company will approach you (within the next two years, sometime before 2026) offering you a digital twin of yourself, and showing you some compelling use cases of why you need a digital twin. People will find living without a digital twin of themselves as strange as living without a smartphone. It will become that useful.

ATN
Q: Is this you or your digital twin delivering the speech? How do we know for sure this is not your digital twin delivering this presentation?

Jim Spohrer (Guest)
A: Exactly! Someday the only way you will know is because I want my digital twin to identify itself as my digital twin, so people know they can ask it all kinds of questions to get my perspective. I might require similar access to your digital twin before I will respond to some of your questions. In my EIT digital presentation that I posted to slideshare, I have a number of things that I am working on in my backup slides – so check them out – including “Topics for Discussion” – Beyond Language for Communications: “Here is how my AI, using my digital twin of you, predicted that you would respond to my request – could you please ask your digital twin of yourself to check this response and suggest improvements?” Hopefully our digital twins (collectively – within a company, within a city, within a nation, or even globally) will allow our opinions on a wide range of topics to be shared very quickly to evolve better policies and better informed citizens. Think – “let’s solve the UN Sustainable Development Goals” – for example.

YZ
Q: How do you feel about your digital twin? What do you think are the boundaries between being monitored and being helped? Do you think this progress is controllable?

Jim Spohrer (Guest)
A: Good questions. I feel I need to try to build my digital twin, both as my AI helper and because large companies are doing it as well – for their purposes, not necessarily my purposes. I am pro open-source builders and makers. Yes, I do not want a company or bad actors to hijack my digital twin – so the boundary between being monitored and being helped is a slippery slope indeed. One cannot afford to become lazy or complacent about these issues. No, I do not think progress is controllable. However, I do think people are resilient and can spring back from disasters. At the end of my presentation is a pointer to a book by Dartnell called “The Knowledge.” I think it is important for people to think about disasters (a bit, but don’t become a doom scroller – resist that temptation) and prepare to be resilient.

AA
Q: What is the technical field in which you can foresee the most intense disruptions thanks to AI progress?

Jim Spohrer (Guest)
A: Some of the people I follow see the biggest short-term impact on gig workers who do art production, marketing copy production, video creation, music creation, etc. I will try to find a pointer to add, but in general start with Ethan Mollick (UPenn Wharton) and his “One Useful Thing” Substack. Reminder: my presentation for EIT Digital listed above has a slide on who I follow. However, I also follow some people who see scientific advancement as ripe for disruption through accelerated scientific discovery. So long term, I expect the scientific disruption – and the resulting discoveries about the human brain, the evolution of life and the universe, and the evolution of service systems in society – will have the biggest impact.

AU
Q: What happens if there is a mistake in AI, how do you fix it? i.e. a wrong prediction of what I’m going to buy .o)

Jim Spohrer (Guest)
A: Most LLM (Large Language Model) chat tools have a feature for the user to give feedback. The user can also say in the chat something like “That was not helpful because of X. Please try again and this time bias your probabilities with this fact about my request: Y.” Or more generally, just type: “The last answer was not helpful. Please ask me some questions that I can answer to help you, as a vendor-controlled AI tool, generate a better and more helpful response to my previous request.”

PKN
Q: How do you think AI is going to develop in a way to answer more generalized questions, like the moving-disk puzzle, from 3 columns to 4?

Jim Spohrer (Guest)
A: Check out Google DeepMind’s AlphaGo – my summary slide on the history of AI includes a reference. There is also research on LLMs that write programs to solve puzzles that require recursion, like the disk puzzle. Also, OpenAI’s early Playground work was very impressive to me. I am not sure I follow anyone specific on game-play AI and that active research area – but I suggest Matthew Berman on YouTube.

AU
Q: How reliable & repetitive are answers in ChatGPT if the user clicks on repeat (circular arrow)?

Jim Spohrer (Guest)
A: After only limited experimentation with OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Bard, and Microsoft’s Bing AI, I am not sure I can provide a good answer. Sometimes it seems to vary widely depending on the prompt/task, and other times it seems to get stuck or confused. After all, these are just “monkeys at the typewriter” and “stochastic parrots” that predict the next token based on questionable training data of variable quality – some fact, some fiction, social media rants, and other noisy data sources. Some of the people I follow give a lot of prompt-engineering advice on how to get better answers, but for me their tips seem to have very limited utility. Again, I am sure there is someone to follow out there with much better answers. Professor Ernest Davis (NYU) used to do a lot of testing, but when the vendors got more secretive, he posted “without the following information, I cannot do scientifically useful experiments” – so he had to reluctantly give up. You ask a great question, and I think governments who care about fighting misinformation using scientific methods should care about the situation. Who to follow? Not sure…

AU
Q: Can you say a few words about data handling? Imagine being a bank company. Would you want your employees to use ChatGPT? Would you want your devs to use Copilot?

Jim Spohrer (Guest)
A: I would assume that some of them (employees) are already using it (AI) without my permission. So if I were a leader in the company, I would quickly put in place a mandatory education module for all employees about safe and permitted usage patterns, and unsafe and not-permitted usage patterns, then work with HR to roll this out to all employees, and let employees know that if they violate the rules their employment can be terminated. I would also set up a Slack channel for employees to ask questions of each other and share answers, along with several expert employees who have responsibility for being lead influencers inside the company. As mentioned, IBM was very good at doing things like this quickly, so “responsible employees” could quickly become “aware employees.” IBM also (I am retired) had annual BCG (Business Conduct Guidelines) training that every employee had to take and sign off on, so their direct managers knew the employees were aware of the rules. A lot of the existing materials for using social media and how to treat confidential data would apply. My first three rules would be: (1) NEVER upload confidential data to an AI tool. (2) NEVER trust the results unless you have verified them with authoritative sources. (3) If you have used an AI tool to help create anything for the company, a customer, a partner, etc., you must indicate that in the deliverable, and keep a personal log of the tool’s name, vendor, and date of use, as well as errors detected and corrected. This is a rapidly evolving area, and again – who to follow is key.

SKE
Q: How do we navigate wisely the paradox of limitations by f.ex. GDPR and company polices/ IT-security etc, while also being encouraged by employer to be enthusias…see more

Jim Spohrer (Guest)
A: See the answer above. Companies and organizations need to have “aware employees” in order to have “responsible employees.” All technologies create harms. Even buttons are a choking hazard to small children – many buttons look like pieces of candy. Besides accidents, we know all technologies can be used by “good/responsible actors” to cocreate benefits in business and society or by “bad actors” to create harms or mischief. So we need “aware employees” and “aware actors” in business and society. The foundations of democracy depend on an educated and aware population that can take responsibility for the consequences of their actions in a civil society. I recommend following Prof. Gary Marcus (NYU), as he seems to be very interested in European regulations at the moment.

HN
Q: How would one use AI to set higher goals?

Jim Spohrer (Guest)
A: Great question. My personal recommendation would be to ask the AI tool to summarize the UN Sustainable Development Goals – as examples of higher goals that a person might pursue. Using the AI tool to give examples of higher goals that people can pursue today is important. Then I would search for people and news sources to follow that work on higher goals (for example FutureCrunch – good news you do not hear about). Next, I would let the AI tool know about me, my strengths, my weaknesses, what I think I like, what I think I don’t like (this is the beginning of your digital twin) – for example, upload your resume or bio or CV to Anthropic’s Claude, and ask the AI tool to create a summary of you. Depending on many factors, you can explore higher goals that are well suited to you and your situation in life. I expect OpenAI will have a GPT in the marketplace for motivational interviews of people seeking higher goals – more ambitious goals to work on. I also recommend reading Adam Grant’s book “Think Again” as well as Damon Centola’s book “Change” – these lay the foundation for a lot of self-analysis. There could be a thousand interesting answers to your great question – I hope this attempt at an answer inspires you to look for other answers and people to follow to learn from as well.

AL
Q: What can we do to avoid the internet being polluted with generated content that in turn will feed into the training data of future AI

Jim Spohrer (Guest)
A: Great question. I don’t know. I think it will be pretty much impossible without further advances of AI tools toward a generate-test-and-debug architecture (beyond the predict-next-token architectures of today – the stochastic parrot, the monkey at the typewriter, the latent space explorer – which are pretty good at creativity, but not very good at truthfulness). I am giving a talk on this topic later today for NextColab on dealing with hallucinations. In the talk, I recommend checking out Q* as explored by the AI Explained YouTube channel that I follow. These are early days. AI is impressive, but imperfect, and it will get better. Once AI can get 100% of basic math questions correct, then it will have a foundation to build upon for what is true. Some things we can know are true because of mathematical proofs – these are probably the strongest truths that an intelligent entity (a person, a cognitive system entity) can know. Once we have this foundation in place, then we can get AI systems to understand “computational truths” – for this I recommend following Stephen Wolfram, and perhaps start with Lex Fridman’s interviews of Wolfram. Computational truths are another foundational building block for a system to be able to know what is true about reality. From there, we have to go to scientific “truths” – which is of course knowing authoritative sources for the most part. Each type of truth about reality must be clearly understood before we can get past the current state of misinformation in the world. This is an important goal to work on – and thanks for the question.

AU
Q: Hi and thank you for this great talk. Do you believe that in the future, say 2080, we will be able to build AGI? and what are your opinions on that? Thank you in advance.

Jim Spohrer (Guest)
A: Yes, AGI by 2080 seems doable. Alan D. Thompson (YouTube) is very optimistic about AGI arriving sooner. His bar is based on IQ testing – which I think of as a “low bar,” an easier test, simply because the amount of information in LLMs is already humanity-scale (Wikipedia and beyond), and because so many answers to complex IQ-like puzzle questions are already on the web, so building systems with a “higher IQ,” however you measure it, is a “low bar.” Also, see “Levels of AGI: Operationalizing Progress on the Path to AGI,” which seems to me to be a “lower bar” approach, based on comparisons to human-level intelligence in a population of people with capabilities to perform a range of tasks. Prof. Gary Marcus (NYU) and Prof. Ernest Davis (NYU) would be better judges of AGI – they have a higher bar based on deep understanding of cognitive science and AI. Personally, AGI to me will require an episodic dynamic memory – read Roger Schank’s book “Dynamic Memory” for an in-depth discussion. Our individual identities are based on our episodic memories of our own lives. So to me AGI will have to have an identity, and the question is whether it will exist in isolation (a one-off like “Commander Data,” the android in Star Trek) or in a larger population of AGIs – I suspect the latter. If you have a population of AGIs with episodic memories and identities, it is more like Robin Williams in Bicentennial Man – where the interactions with people will be very complex and intertwined friendships, and even seeking legal societal rights as they show responsible actions. I have written about my views of the difference between cognitive systems (which include animals, such as dogs, that have social interaction skills) and service systems (like people, businesses, nations) that are also cognitive systems but have rights and responsibilities. Taking responsibility for the consequences of your actions is what an AGI would have to demonstrate to me before I would put the “AGI solved” sign out. Prof. Tom Malone (MIT) writes about superintelligence and argues that companies are already superintelligent entities with identities and responsibilities. I see true AGI being more like when the notion of the company and the limited liability corporation were formed. It will require a legal foundation, not just a technological achievement about high-IQ systems. I do not see this happening until between 2035 and 2040 at the very earliest, and perhaps not until 2060-2080. That is my “higher bar” definition and timeline reasoning. So I do not consider achieving AGI a purely technical thing; it is also a social and regulatory achievement – so much harder. Intelligence without accountability is not true intelligence. To define intelligence, you also have to define accountability.

FDA
Q: First of all, great talk. Secondly I am student and I use chatGPT on a daily basis and I noted that in some way I got “”addicted”” to it for example to write cod…see more

Jim Spohrer (Guest)
A: Great question. People get addicted to drugs, video games, social media – and clearly AI and the Metaverse will fuel even greater addiction in some people. Like breaking any addiction, it requires surrounding yourself with many people who are not addicted. Addicted people like to hang out with each other – and certainly online life and the digital transformation of business and society make this easier than ever. I try not to do more than six hours of screen time a day. I do not read books online, because I do not want that screen time burden added to my usage. I hope my digital twin will be able to help me reduce my screen time even further. My best advice to avoid digital addiction is to surround yourself with people who are not digitally addicted, and do activities with them. The higher-purpose activities are good, and also just being social on a hike or doing amateur sports together. Go camping from time to time with no screen time. Who to follow on this topic may be hard, since we tend to follow influencers in the digital world!

ME
Q: what’s been this GDP improvement due to AI in the last many years? can you name examples?

Jim Spohrer (Guest)
A: Great question. A bulldozer versus a shovel for digging holes. A spreadsheet versus an early paper-tape calculator for helping with the books of a business. Better tools – better building blocks – boost productivity and GDP (Gross Domestic Product) of nations. The biggest bumps in GDP historically have come from social changes rather than technological changes. For example, women entering the workforce is one way to double overall GDP, but that does not necessarily boost GDP per worker (much). To boost GDP per worker, you need more productive workers – workers who have access to better tools or more efficient (right plan) and more effective (right goal) methods. For example, digging a ditch with a bulldozer instead of a shovel. Doug Engelbart was one of my mentors, and his 1962 paper on augmenting human intellect is a historic document in my mind. Doug is remembered as the inventor of the computer mouse, but long term he will be most remembered for all his work on boosting our collective IQ to work on complex and urgent problems. Doug had truly “ambitious goals” – and he should be studied, including his historic 1968 demo, which is called “The Mother of All Demos.” While I find Alan D. Thompson (YouTube) overly optimistic on AGI, I think he has done some inspiring work thinking about better tools, compared with our tools of “the olden days.”

AU
Q: Are there any generative AI use cases on resource constrained edge devices out currently?

Jim Spohrer (Guest)
A: AI is certainly moving to all mobile phone towers for 4G and 5G – but I am not sure if that is resource-constrained by your definition. Check out the ISSIP.org blog post series, and look for information about last year’s ISSIP Excellence in Service Innovation award winner (Bluetooth Low Energy related), Armen Maghbouleh (the ISSIP YouTube channel recorded his great talk with use cases), as well as blog posts by ISSIP President 2023 Utpal Mangla (IBM GM Edge Cloud and AI) and by Christine Ouyang (IBM Distinguished Engineer) and the ISSIP Ambassador lead. Please join ISSIP.org while you are at it – free sign-up, and you get a monthly newsletter and the opportunity to keep learning along with us.

AL
Q: Awesome talk (as always!), Jim! What do you consider a top ‘impressive but imperfect’ issue in higher education at this time?

Jim Spohrer (Guest)
A: Great question, Ana – and I miss the days when you were in my Service Research Group at IBM Almaden Research in San Jose, CA (Silicon Valley) – we could have lunch and talk about this great question. See Prof. Ethan Mollick (UPenn): his “One Useful Thing” Substack weekly posts have addressed this question of today’s AI tools in education. I also recommend connecting with Prof. Terri Griffith (Simon Fraser University in Vancouver, Canada) – she is also a former ISSIP President – and she has great insights into using today’s AI tools in education as well as for productivity in business. It is important that people remember to disclose when they use AI tools – that is what is most important. Students (and faculty!) should be encouraged to use AI tools, but to do so ethically and always disclose which vendor, which tool, what date, and some aspects of the errors, corrections, checking, and division of labor. This is extra work and will impact productivity, but it is the only ethical way to use these tools today that I see.

GR
Q: Many point of views regarding AI, especially on the ethical side. I noticed that the more I listen to podcast against it, the more I avoid using it. What is you…see more

Jim Spohrer (Guest)
A: Great perspective. I wrestled with this, and decided avoiding their use was not as good for society as embracing them and trying to be an “aware actor” working for ethical usage. See Noam Chomsky if you want a view opposing mine. Also, check out Prof. Joseph Weizenbaum (MIT), who invented the first chatbot, Eliza, in the 1960s, for another point of view opposing mine. I had Weizenbaum as a professor when I was at MIT. Still, I think it is better to be aware of the strengths and limitations of the tools than to avoid using them. However, I agree there are other points of view – equally valid. I very much respect the Amish, even though I am not one of them. Do you think “bad actors” will avoid using AI tools? No, bad actors will learn AI and exploit it, so we had better prepare. Responsible actors need to become aware actors. However, this usage of potentially unethical and illegal tools (courts will decide) can lead to lose-lose cycles called “Moloch” (I follow Liv Boeree).

AGH
Q: The productivity increase that you showed, GDP/employee, can come from increasing GDP or from reducing employees. How do you see that evolution?

Jim Spohrer (Guest)
A: Both for sure – in waves. Big companies may have fewer workers (reducing employees), but the workers that leave will go to more entrepreneurial ventures where they can then upskill faster. Ultimately the difference between customer and employee disappears, and it becomes more about business-like sports that have super entrepreneurs racing to create unicorns (Z2B – zero to a billion in revenue, or users, increasingly quickly). Z2B is not possible without customers, but aren’t the customers actually (by using the service system) contributing to its improvement and growth – which is what employees do? Regarding GDP per worker calculations for the USA, check out my blog post – I just get the data and use Wolfram Alpha and Microsoft Excel spreadsheets to follow the trend. But the bigger trend is clear as well.

AU
Q: Thanks for the amazing talk! Very inspirational Can you elaborate more on the GDP/person measure?

Jim Spohrer (Guest)
A: Yes, check out my blog post. GDP/worker – not GDP/person – is a measure of a nation’s ability to augment its workers to make them more productive. I also recommend Don Norman’s book “Things That Make Us Smart” and W. Brian Arthur’s book “The Nature of Technology.” William Rouse and I wrote a journal article that explores GDP/worker and the lowering cost of computation (AKA Moore’s Law).
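As a minimal sketch of the trend-following calculation (in Python rather than a spreadsheet), the figures below are illustrative placeholders, not actual statistics – substitute real GDP and employment data from an authoritative source before drawing any conclusions.

# GDP per worker = GDP / number of employed workers, computed year by year.
gdp_by_year = {2000: 10.0e12, 2010: 15.0e12, 2020: 21.0e12}   # nominal GDP in USD (illustrative placeholders)
workers_by_year = {2000: 130e6, 2010: 140e6, 2020: 150e6}     # employed workers (illustrative placeholders)

for year in sorted(gdp_by_year):
    gdp_per_worker = gdp_by_year[year] / workers_by_year[year]
    print(f"{year}: GDP per worker = ${gdp_per_worker:,.0f}")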

MLA
Q: What would you most recommend us trying with IBM Cognitive OpenTech?

Jim Spohrer (Guest)
A: Great question. While I have been retired for over two years, I would recommend checking out IBM’s contributions to the Linux Foundation’s AI & Data Foundation. IBM has contributed some awesome tools, as have other companies. IBM’s contributions have included tools for helping to build trustworthy and explainable AI. I am sure that Bill Higgins, Susan Malaika, Jeffrey Borek, and others at IBM are continuing to push the envelope on open innovation.

JH
Q: Thank you! Can you share the tools with links or names?

Jim Spohrer (Guest)
A: Sure – here you go:
Chatgpt.openai.com
Bard.google.com
Claude.anthropic.com
I keep all three open in a browser, as well as Microsoft Bing AI, and compare results – this helps me spot errors more quickly. For example, fire up the AI tools, and try this prompt.
Prompt:
“Please create a table that lists the following innovations in column 1: Plow, Cities, Writing, Standard Measures, Written Laws, Money, Compound Interest, Compass, Universities, Clock, Steam Engine, Constitutional Government, Universal Education, Lightbulbs, Automobile, Installment Payment Plans, Credit Cards, Online Trust (e.g., eBay reputation system), Ride sharing, Room sharing. Please also include a second column with the approximate year of invention. Please add a third column with the major benefit of the innovation. Please add a fourth column with any harms created or enabled by the innovation.”
Have fun!!!

AU
Q: Could you provide links to the different AI programs?

Jim Spohrer (Guest)
A: Yes, see above and below for some of them, but also check out the lists on the slides in my presentations, and start following some of the influencers that I follow to learn about more AI tools for a wide range of tasks. Also see:
• #1 Magic Eraser – Have a great photo but with something annoying in the background? Remove it easily: https://www.magiceraser.io

• #2 Craiyon – Words to pictures: https://www.craiyon.com

• #3 Rytr – Writing tool: https://rytr.me

• #4 Thing Translator – Picture to words: https://thing-translator.appspot.com

• #5 Autodraw – Sketch to Drawing: https://www.autodraw.com

• #6 Fontjoy – Font pairings made simple: https://fontjoy.com

• #7 Talk to Books – Ask questions to 100,000+ books: https://books.google.com/talktobooks/

• #8 This Person Does Not Exist – Need a face that belongs to nobody? https://thispersondoesnotexist.com

• #9 Namelix – Need to name a project? https://namelix.com

• #10 Let’s Enhance – Improve image resolutions and clarity: https://letsenhance.io

Some may have already disappeared in failed startups and reappeared in new startups.

PI
Q: Do you use multiple AI applications at the same time? Discuss with them at the same time?

Jim Spohrer (Guest)
A: Yes, for research and writing code/programs I use:
Chatgpt.openai.com
Bard.google.com
Claude.anthropic.com
I keep all three open in a browser, as well as Microsoft Bing AI, and compare results – this helps me spot errors more quickly.
I also experiment with other AI tools – new ones every week – that create images, videos, music, and much more. These tools are like a “digital muse” – impressive but imperfect, and getting better over time. See the use cases above, as well as the playbooks students are generating.

AU
Q: What are the best AI tools you’ve found besides ChatGPT?

Jim Spohrer (Guest)
A: I use:
Chatgpt.openai.com
Bard.google.com
Claude.anthropic.com
I keep all three open in a browser, as well as Microsoft Bing AI, and compare results – this helps me spot errors more quickly.

AU
Q: What is your opinion about the EU AI act and similar frameworks? How are we going to safeguard us from…

Jim Spohrer (Guest)
A: I follow Prof. Gary Marcus (NYU) who seems to be doing a good job tracking this and thinking about the issues. Better than me.

IDW
Q: Do you have a name or a source with more information on AI in a dystopian and utopian setting that you discussed shortly before?

Jim Spohrer (Guest)
A: Yes, for utopian see Alan D. Thompson (YouTube), and for dystopian, the people who can scare me the best are Tristan Harris and Aza Raskin at the Center for Humane Technology – see:
Harris T, Raskin A (2023) “The A.I. Dilemma – March 9, 2023.” Center for Humane Technology. URL: https://youtu.be/xoVJKj8lcNQ. From the video description: “Tristan Harris and Aza Raskin discuss how existing A.I. capabilities already pose catastrophic risks to a functional society, how A.I. companies are caught in a race to deploy as quickly as possible without adequate safety measures, and what it would mean to upgrade our institutions to a post-A.I. world. This presentation is from a private gathering in San Francisco on March 9th with leading technologists and decision-makers with the ability to influence the future of large-language model A.I.s. This presentation was given before the launch of GPT-4. We encourage viewers to consider calling their political representatives to advocate for holding hearings on AI risk and creating adequate guardrails. For the podcast version, please visit: https://www.humanetech.com/podcast/”

FN
Q: How do you keep your inspiration/motivation when using AI, if what you enjoy is actually the problem solving and not just ordering somebody/some AI to solve it?

Jim Spohrer (Guest)
A: Steve Jobs and Alan Kay talked about bicycles for the mind – not cars for the mind. Bicycles make you stronger at walking, not weaker. Cars make you weaker at walking, not stronger. We need to better design technologies to make us stronger, not weaker. The best way to stay inspired and motivated is to have truly ambitious goals that may not even be solved in your lifetime, but are important for humanity. Read Krznaric’s “The Good Ancestor.” Staying inspired and motivated is much easier for some people than for others, but everyone can learn tricks. The older you get, the more tricks you develop and the easier it gets. Since I grew up on a farm and like to feed the animals in the early morning, I developed some hobbies and tricks. I get up and go outside walking, looking at the stars in the morning, thinking about the history of the universe, and getting to know all the stars as best I can. I am up most days by 4am – and reading books. There are tons of YouTube videos about developing philosophies of life (see “Stoics,” for example) that help you jump out of bed in the morning (see “Make Your Bed,” for example). Or you can think about someone you aspire to be like. Or you can think about the people who depend on you. But structuring all these thoughts and activities into tricks that work for you is an effort in individual exploration. What works for me is unlikely to work for you. Many people have created lists to help others. I have not created such a list yet. I am still exploring. This is a common question I get – so I should come up with a good answer. I believe a positive and growth mindset is important. I also think it is important to compete only with your past self and not with others. Learning to invest systematically and wisely in becoming a better future version of yourself is a good “mission statement” for your digital twin once you build it.

AU
Q: Do you think that people/jobs will be completely replaced by LLMs? You feel that is an opportunity (e.g. cheaper to innovate) or a problem (e.g. misinformation)?

Jim Spohrer (Guest)
A: No, there is no end of work (purposeful paid activities) for people. What specific types of activities, and how and what types of payments occur to people, will change, as they have over the last two centuries and the last two thousand years. Lies (misinformation) have been a problem throughout recorded human history, and people do not like to be lied to or deceived. Parts of the story change, and parts of the story do not change.

EM
Q: In which area do you see AI moving faster? Health? Urban Mobility? What impresses you?

Jim Spohrer (Guest)
A: Personal productivity for content generation and the scientific process are the two areas I am watching for generative AI. For robotics, I watch robots for farms and home maintenance. I also watch geothermal energy and other things – see my bio slide at the end of my presentation, which has my interests and change-maker priorities.

FG
Q: Could the concentration of power of a few companies developing AI be problem?

Jim Spohrer (Guest)
A: Yes. Read Prof. Tom Malone’s writing online about “superintelligence.” Check out “Win-Win Democracy” by Lee Nackman (retired IBMer). Also search for “regulatory capture” for more scary stuff.

AU
Q: There were rumours about AGI being achieved in OpenAI during the Sam Altman’s drama. What are your views…

Jim Spohrer (Guest)
A: Check out AI Explained on Q* – it may be a breakthrough. Too early to tell.

AU
Q: Thank you. Great lecture. Why does ChatGPT prefers images to PDFs?

Jim Spohrer (Guest)
A: I don’t know. Anthropic’s Claude is pretty good with PDFs.

AU
Q: Is there any way to mark AI-generated materials to identify them? Like a digital key or something. If you know of any work or studies about it.

Jim Spohrer (Guest)
A: In the digital age, you can try – but these are easily thwarted. I think Prof. Gary Marcus (NYU) has made some posts on this topic.

AL
Q: for Jim: what about symbolic AI? Will we see a come back as part of e.g. one-shot learning?

Jim Spohrer (Guest)
A: For sure, someday. The work of Ken Forbus and Tom Dietterich (“What’s wrong with LLMs, and how to fix them”) is good. Also see Prof. Gary Marcus (NYU) and his Substack remembrance of Doug Lenat and the Cyc project.

AV
Q: How can we use it in teaching?

Jim Spohrer (Guest)
A: See Prof. Ethan Mollick (UPenn Wharton) and his posting on Substack – lots of ideas.

ITF
Q: can you connect a GPTs to your webpage?

Jim Spohrer (Guest)
A: A question for Marco Podien; I think I asked ChatGPT that question as well. Here is what I found in the Neuron AI Newsletter – “How to Add Custom GPTs to Any Website in Minutes (OpenAI GPTs Tutorial),” by Liam Ottley (127K subscribers).

AU
Q: Is there any way to mark AI-generated materials to identify them? Like a digital key or something. If you know of any work or studies about it.

Jim Spohrer (Guest)
A: Like a watermark? These are “easily” defeated. However, perhaps I do not understand this question. Are you thinking that regulators need to require that AI vendors ensure that all their AI content has an “unremovable watermark” indicating the vendor, tool, and date for the AI content? I like this idea from the perspective that I want to use many generative AI tools, and yet have a single diary of all my usages with this information, for my own personal use. Or are you suggesting that anyone be able to query content to see if there is an “AI generated” version of it out there, for law enforcement purposes? Thanks for an interesting question, but I am not sure I know what you mean (or are asking for) exactly. Knowing the intended purpose would help.

MS
C: AI is pervasive today, and the risks are often hidden, who wins and loses from large-scale deployment of AI may not be about rogue superintelligence, but about …see more

EM
C: That is it. Take the risk, amazing talk!

AA
C: Europe is in the second phase of building the world’s digital twin (Destination Earth), embraced and funded at the EU level.

TH
C: A quite interesting question to ask ChatGPT: If you combine Pokemon Go and Roblox, what do you get? :-)

AU
C: About ChatGPT not knowing about the impact: we all were there, I think we all remember at the beginning having struggles in getting answers for the requests ove…see more

AU
C: Her name is Lyra if I ask it as well =)

SKE
C: I use it for editing texts, writing outlines for speeches, directing text to a particular target receiver,…

AU
C: I use it for formatting various studies, but always use a standard approach with reliable sources to…

AU
C: Improving the language on my website and the website at work. It’s unbelievable how well it works.

AU
C: I use ChatGPT for coding; it is great when you are stuck.

AY
C: What I do is use API access instead of ChatGPT even for chats. There, access to GPT4 and the new GPT4 turbo is available. Looking forward to obtaining GPT4v acc…see more

AU
C: Just a note: I can’t see the meeting chat, only the Q&A is available for me in the teams UI, I couldn’t reply to the survey. 🙁

AK
C: We are having an unexpected issue with the chat. Please bear with us as we try to solve it.

OP
C: Please follow the link for the poll https://forms.office.com/e/i1ZhAB7PUn

OP
C: Please use the following link to fill in the Poll https://forms.office.com/e/RD7eud7UCF

EM
C: there is a little inconsistency on the questionnaire, 100M instead of 100B users for ChatGPT. Great first class, congrats for EIT Digital! 🙂

AU
Q: How will the results of the quiz be checked for the eligibility of the certificate?

EN
C: It was a very informative session. I have already started to diversify the AI voices I follow. Before this lecture my following was biased and I had never really given it any thought. Thanks again for an informative session.

DM, AA, AU, JM, BZ, ATN, AA, TM, LK, PC, CM, FDA, AA:
C: Thank you both for the amazing and insightful presentations!! Thank-you for the talk. Thanks for the wonderful presentation! Great presentation, thank you! Thanks for the brilliant presentation! Thank you for the amazing presentation and sharing! Thank you for the amazing presentations!!👍 Not a question. Thank you for sharing your experience and knowledge! 🙂 Thank you. Great lecture. Thank you for this amazing presentation. Great presentation:-). thank you Jim. Not a question, but thanks for the lesson! Looking forward to more specific courses maybe! (CS student). Thanks for the inspirational and great talk! Guys thank you so much. It’s been a while since I joined such interesting presentations.

SM
C: thank you Jim!

Jim Spohrer
A: Thank-you for inviting me Salvatore – and thanks for your awesome support of ISSIP.org over the years! Your support for ISSIP and Service Innovation has been huge and significant. Thank-you!

Jim Spohrer (Guest)
C: As I have a chance – over coming weeks – I will try to address some of the questions and comments in the Q&A here in my blog post – https://service-science.info/archives/6521

Reflections
For me, the productivity and quality advantages of generative AI are quite clear. Generative AI is like a memory with fingers for typing, drawing, and presenting. The typing can produce essays, tables, programs, and more. Sometimes when I want to do something, I know it will take a lot of typing or moving my lips (speaking), and I can describe what I want (prompting) faster than actually doing the memory retrieval and the typing, even for things that I have done many, many times. I guess that is the point.

Reason 1: Generative AI is faster and easier at some routine tasks that I am an expert at doing myself; the generative AI is simply faster, and (for images) the quality is better.

For example, in programming, when I code a function that I have written dozens of times; or in creating a table or image that I know I can create, but that would take a lot of time and effort; or when I have to give a talk to a different audience and need a snippet of a talk I have given dozens (if not hundreds!) of times.

TO DO
For fun with the Ambassadors, to give them creative ideas or to give myself creative ideas.

For making lots of stuff – historic service system cases – to explain something complex.

For thinking about serious science when AI gets a bit better.

For doing new and unique things like writing the history of all humanity when AI gets a bit better – it is a kind of collective memory system.

For thinking about the evolution of service systems.

References

EIT Digital Generative AI:
https://professionalschool.eitdigital.eu/generative-ai-essentials

Spohrer’s Presentation for EIT Digital on Generative AI:

AI Upskilling Newsletter Request:
The newsletter article:
https://us7.campaign-archive.com/?u=d0f540537d3ef307e062e3dd6&id=c409413dfe
The process of creating it:

Upskilling With AI: Part 1

Spohrer’s Presentation to men in their 80s at Terraces of Los Gatos:

ISSIP AI Collab – working with students

AI COLLAB Offering Details

