Among the unintended consequences of AI applications built with few guardrails is that conventional views of what counts as intellectual property are upended – I found out the hard way!
Well, that was a surprise!
An acquaintance contacted me on LinkedIn about something unrelated, and then congratulated me for being forward-thinking enough to work with an AI bot associated with the Board of Innovation. “Say what?” thought I. I know of the Board of Innovation, but I’ve never worked with them or done any joint projects. So I thought I’d check it out.
Sure enough, whoa! Not only was I on their website as a provider of AI-enabled strategic advice, but so were Michael Porter, Peter Drucker, Clay Christensen and Sun Tzu. I suppose it is flattering to be included in that company, at least, as my friend Chris Yeh reminds me. To add to the creepiness of it all, the bot was advertised as offering “personalized strategy, without the expensive fees.”
So I thought I’d give it a spin. Worse even than the use of my name and reputation, the responses from the bot were awful. I mean, really bad. It confused my work with some work done by McKinsey and provided guidance that I found kind of horrifying, guidance that would have been a black eye on my reputation if anyone had taken its responses seriously.
Of course, the call to action was to contact the Board of Innovation for more information.
I wrote to the Board on their “Contact Us” page and asked that this be taken down right away. Meanwhile, my colleague Ron Boire posted about it.
Here’s what he said:
When AI goes way too far: The Board of Innovation has published a ChatGPT bot that purports to give advice as Rita McGrath, who has never given permission for them to use her name, much less pose as anything close to the strategy advice she would give. This is theft of a globally recognized strategic thinker’s name and reputation! It gets worse when you use the “Rita McGrath” bot; the advice it gave me used the Horizons framework, a framework that Rita has never promoted and didn’t develop! This nonsense and outright theft must be called out and stopped! BTW, they have a convenient link at the bottom of the “chat” asking the user to connect with the Board of Innovation if they need strategy advice – lovely!
As an example of the bogus advice the thing was throwing out, see this link (Ron captured a screenshot of what its guidance was):
The Board’s response
I did get a note back from the Board, who said “The AI toolbox for innovators was one of the team’s early experiments with building a skin over the OpenAI API.” They did take it down and now you get a 404 message when you try to go to that page, which was here:
Note that the header for the page is still there, but they did take down the content.
Hoovering up the hard work of others
This is just one example – and a very modest one, with few serious consequences that I know of – of how AI with no rules and guardrails can get organizations into hot water. A far more consequential issue is raised by William D. Cohan, writing in the Washington Post.
As he says, “The other day someone sent me the searchable database published by Atlantic magazine of more than 191,000 e-books that have been used to train the generative AI systems being developed by Meta, Bloomberg, and others. It turns out that four of my seven books are in the data set, called Books3. Whoa.
Not only did I not give permission for my books to be used to generate AI products, but I also wasn’t even consulted about it. I had no idea this was happening. Neither did my publishers, Penguin Random House (for three of the books) and Macmillan (for the other one). Neither my publishers nor I were compensated for the use of my intellectual property. Books3 just scraped the content away for free, with Meta et al. profiting merrily along the way. And Books3 is just one of many pirated collections being used for this purpose.”
As he quite rightly points out, the big tech companies’ “stock-market valuations have soared this year, thanks in part to their AI announcements and products, which are largely dependent on hoovering up the hard work of others.”
What would be the right and correct thing to do? According to Cohan, “The AI companies should pay authors a fair price to option their books for the right to consume their contents, just as Hollywood does when embarking on a film, documentary or television series. (Apple reportedly paid Michael Lewis $5 million for the movie rights to his new book about Sam Bankman-Fried.) And then also agree to pay authors royalties, if there are any to be had.”
Right now, there seem to be no legal frameworks or protections that would allow people (like me) to keep anyone from hijacking whatever material is in the public domain and putting it to whatever purpose they deem appropriate. Back to Cohan: “To get companies with a combined market value in the trillions of dollars to stop stealing intellectual capital from writers might even require congressional action. The sooner the better.”
With any new technology, it takes us a long time to appreciate its eventual form. But just as we probably wish we had established a framework for the use of highly personal, private data to serve advertisers, we are most definitely going to wish we had taken a nuanced and thoughtful approach to the ownership of the intellectual property used to train these powerful large language models.
You don’t need to see around too many corners to realize that.