Meta: early indicator of Zuckerberg’s impact on education

Last week the Chan Zuckerberg Initiative (CZI) bought a company called Meta. This small acquisition is potentially seismic in its implications for the worlds of science and education.

CZI is the philanthropic vehicle for Priscilla Chan and her husband Mark Zuckerberg (I put the couple in that order because they do). Since the Initiative was launched with some fanfare, there have been only a few announcements – ambitious, striking, but mostly short on detail. Chan Zuckerberg Science was announced with the intention of “helping cure, prevent or manage all diseases in our children’s lifetime.” The highly regarded Jim Shelton was appointed head of Chan Zuckerberg Education. They announced their first lead investment in edtech and employability company Andela, alongside a group of other well-known investors in the space. So far, so expected.

The Meta acquisition is different. In the company’s own words, Meta is “a tool that helps researchers understand what is happening globally in science and shows them where science is headed.” Facilitated by the move towards Open Access in scientific publishing, the venture-backed company was selling Artificial Intelligence-driven insights about the state and potential trajectory of research to publishers, pharma companies, and others.

Now, Meta’s toolset is going to be totally free, and the company has a clear mission to open up its technology to humans and machines, and to collaborate with the research community to make itself better. The company’s for-profit drive has been up-ended in favour of impact alone. The venture investors are going to take their financial exit and leave (presumably having made profits they feel happy with).

This is unprecedented as far as I’m aware. I’ve never known a philanthropic organisation to acquire a venture-backed startup and mould it to its own ends. Invest, make grants, lobby – all familiar stuff from the activities of the Bill and Melinda Gates Foundation and others. Acquire – no.

The effects of this are, in my view, many. There are relatively parochial issues about what happens to the world of Scientific, Technical and Medical publishing, which I spent several years analysing with my team when at Holtzbrinck. Market leader Elsevier’s apparent strategy of building competitive advantage by amassing proprietary data and the algorithms that result from it looks challenged. Clarivate Analytics (previously part of Thomson Reuters) will be worried about the long-term viability of its Web of Science product.

More interesting are the broader consequences and issues. I should say first that I feel that CZI is doing something really interesting and worthwhile here, and genuinely breaking new ground. But there are wrinkles.

  1. CZI just changed a commercial market. The value chain and power relationships amongst players in scientific publishing are altered substantially – not least because this is probably just the beginning of what CZI might do, given their financial firepower. Investors and shareholders will be worried and may hold off on spending decisions. Who knows what will change next?
  2. There is a risk that CZI actually reduce choice and innovation in the algorithms and tools available to analyse the corpus of scientific knowledge. It’s very difficult to compete with free. I suspect they have thought of this and may well want to put money into a range of approaches, but it’s worth pointing out.
  3. Venture capitalists may now have a new exit route to add to trade sale (i.e. sale to another company) or IPO: sell to CZI if your idea is clever enough and aligns with their theses. I wonder how soon I will see this in a fundraising document.
  4. There’s a clear issue of democratic accountability here. Without malice or deliberate intention on anyone’s part, we have ended up in a situation where two people can spend huge amounts of money according to their definition of “good”, and consequently affect a major component of our society. I suspect CZI are thoughtful and well aware of this, and consulted with the scientific community before making their acquisition. Nevertheless, this is another slightly scary illustration of how polarised our world is becoming.

So what about education? All of these issues are directly transferable. I would now be very surprised if Jim Shelton and his team don’t make some major moves which re-balance power in education ecosystems, particularly in the USA but likely more broadly. Investors and existing players need to take note of whatever signals we get from San Francisco.

Yet in my view CZI need to ensure they don’t transfer their model wholesale. Science has a clear, globally accepted value system based on “standing on the shoulders of giants” – rigorous process, peer review, publication. Teaching and learning are different. As I (and many others) have written before, establishing “what works” in education is a highly subjective, value-laden exercise. CZI will need to show how they are fostering innovation and impact in education across a diverse range of cultures and contexts – or unapologetically and publicly adopt a clear set of values, and argue passionately for them. They need to do this whilst avoiding being cast as latter-day cultural imperialists and facing a backlash. This is a tall order in a “post-truth” world where Zuckerberg has already struggled with issues of editorial responsibility in his day job at Facebook. The team have awe-inspiring ambition, power, and potential, which they have only just started to deploy. Things just got interesting.

http://meta.com/#letter

Education and Artificial Intelligence – critical optimism

We shouldn’t believe the hype or succumb to dystopian panic – it is vital to be simultaneously positive and reflective

Artificial Intelligence (AI) seems to be at peak media frenzy right now, with its effects on education frequently in the mix. My own reading in the last few days has included the first published output from Stanford University’s One Hundred Year Study on Artificial Intelligence, news of the American technology giants working together to address societal concerns around AI, and a review of Yuval Noah Harari’s new book Homo Deus, which follows on from his highly influential Sapiens to examine the future of mankind in the light of new technologies.

The attention is not new, even if the intensity and pitch seem higher. Education has long been touted as a sector where advanced machines can make a real difference – and as Audrey Watters regularly points out, this history pre-dates digital computers. “Personalisation at scale” is the holy grail of education technology, seemingly promising better learning with fewer resources. Education can be portrayed as an engineering problem, with the notion of “learning science” becoming prevalent and indeed key to some major corporations’ strategies and marketing literature.

The market is full of marketing, of course. Personally, I’ve become convinced of the promise of better analytics – providing aggregated, digestibly presented data to inform the decisions of students, teachers, parents and administrators. After all, the markbook (aka gradebook in the USA) has long been a key tool of the teacher in its paper-and-pencil form. However, I am instinctively sceptical the moment analytics become “predictive”, or software starts to recommend – or even enforce – courses of action for people who are not empowered to understand, question or circumvent them. I am particularly dubious if a commercial vendor does not wish to explain the theory behind a software algorithm, and/or claims wide, generic application, and/or does not wish for their claims of learning improvement to be subjected to rigorous independent evaluation.

My knee-jerk reaction is clearly questionable. There will never be clear boundaries between analytics, recommendations and artificial intelligence – the moment you ask a question of data and choose how to present its results, you have prioritised that line of enquiry over many others. Some vendors have persuaded me – often via an enjoyable debate – that their algorithms and user experiences work effectively, but only for specific aspects of learning (for example, in delivering and reinforcing factual knowledge, but not in teaching critical thinking). There is thoughtful work being done on student retention, and interesting research taking place on evaluating adaptive technologies.

My concerns remain, however, based on the following:

  • Context is crucial in effective learning – and there are so many different contexts. There are plenty of commentators who talk as if a tool or technology is context-agnostic – a “magic pill”. Yet I have had murmurs of recognition pretty much every time I discuss what I call the “carrot and piece of string” issue: a truly great teacher can probably deliver a great lesson on any subject with a carrot and a piece of string, but a poor teacher will struggle even when equipped with every fancy tool available. This is why thoughtful implementations of technology in education focus on the contexts in which effective learning happens with the provided tool, and almost invariably include training (perhaps for both teachers and learners), evaluation and reflection as part of the rollout.
  • Computers can never capture all of the data which go into learning. In the past I have caricatured this as “the breakfast problem”: some research a few years ago in the UK (perhaps unsurprisingly) indicated that kids who didn’t have breakfast learnt less well during the day. I haven’t yet come across a system which asks kids about their diet, but even if one did capture this relatively simple data point, it would be impossible to gather the totality of the experience the child has brought to school that day and therefore to make the best possible recommendations for what and how (s)he should be learning. A teacher can’t capture all the data either, but they have many more things to work with, most importantly human empathy.
  • Misleading claims of scientific neutrality. All algorithms are written by humans, and education is a world particularly freighted with values. Vladimir Putin may view – and teach – the recent history of Syria rather differently to me. The notion of “learning science” can be used to imply fixed, unarguable goals as much as it can be used to imply a critical, evidence-based approach to acknowledging the complexities of understanding and delivering “good” education.
  • Crass comparisons between education and other sectors, sometimes by people who should know better. Education is not retail or entertainment. To take one example, learning recommendations are sometimes compared to Amazon’s “you bought this, so you might want these things other people like you bought” functionality. Retail is binary, unitary and effectively irreversible: you bought something or you didn’t (yes, you can return an item, but you still bought it). Learning is none of these – you can partially understand a concept, you may need to learn other things before you can master it, and you can forget it too (just ask me to try and explain calculus). A rough sketch of the difference follows this list.

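To make that last contrast concrete, here is a minimal, purely illustrative sketch – a thought experiment in Python, with every name invented for the purpose and not drawn from any real recommender – of the state each kind of system has to model:

```python
from dataclasses import dataclass, field

@dataclass
class Purchase:
    # Retail is binary, unitary and effectively irreversible:
    # the item was bought or it wasn't.
    item_id: str
    bought: bool = True

@dataclass
class ConceptMastery:
    # Learning is partial, prerequisite-dependent and subject to forgetting.
    concept_id: str
    mastery: float = 0.0                                # 0.0 to 1.0; partial understanding is the norm
    prerequisites: list = field(default_factory=list)   # concepts needed before mastery is possible

    def decay(self, days_since_practice: int, rate: float = 0.01) -> float:
        """Crude forgetting model: mastery fades without practice."""
        self.mastery = max(0.0, self.mastery - rate * days_since_practice)
        return self.mastery
```

Even this toy model shows why the “people like you bought X” analogy breaks down: the retail record never changes, while the learning record is continuous, conditional on other concepts, and decays over time.
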
Discussions of the role of AI in learning have been given fresh energy, complexity and relevance by the dizzying advance of technologies which can be described with the general term “AI”, and by the extreme, often dystopian, predictions for the future which have been inspired by this rise (cue Martin Ford and Harari). Deep learning, NLP, expression recognition and the like may, it seems, lead to a future where there are far fewer jobs, irrevocably deep societal divides, or even computers which are more intelligent than humans and present us with an existential threat.

Written (and it seems extensively debated) by a panel of genuine experts, the Stanford report is a useful counterweight to the prophets of doom. According to the report,

“Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind. No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future. Instead, increasingly useful applications of AI, with potentially profound positive impacts on our society and economy are likely to emerge between now and 2030, the period this report considers.”

Frankenstein, mercifully, seems some way off. But the report’s very next sentence shows its nuanced approach, points to another impact on education, and reminds us that we all now have to step up:

“At the same time, many of these developments will spur disruptions in how human labor is augmented or replaced by AI, creating new challenges for the economy and society more broadly.”

Note that there is a deliberate choice not to say that there will be fewer jobs – the uncertainty around this is clearly acknowledged elsewhere – but it is clear that jobs will be different. Here, and implicitly threaded through much of the document, is a highlighting of choice – the decisions that we have to make, collectively and individually, as these technologies become more prevalent and more powerful.

What does this mean for those of us who work in creating new educational products, services and companies, for those of us who research, teach and/or manage in educational contexts, for learners, or for the interested citizen? I’d like to see the following emerging:

  • Transparency from education software developers about the inputs and outputs used and produced by their code. A comparison here would be the approach taken by Android and iOS to contact information, location data, or Facebook posting – the user has to authorise apps to access external data and functionality, and therefore engages with what the software is doing behind the scenes.
  • Digital education products and services which actively include and react to input from their users as their algorithms produce recommendations – empowering teachers, learners, administrators, parents, and more to influence the real-world results (and potentially even improve the software). A rough sketch combining this and the previous point appears at the end of this list. The Stanford report is worth quoting again here:

“Design strategies that enhance the ability of humans to understand AI systems and decisions (such as explicitly explaining those decisions), and to participate in their use, may help build trust and prevent drastic failures. Likewise, developers should help manage people’s expectations, which will affect their happiness and satisfaction with AI applications. Frustration in carrying out functions promised by a system diminishes people’s trust and reduces their willingness to use the system in the future.”

  • A marketplace for personalised learning algorithms based on an open operating system for education, so that software can compete on the basis of its learning outcomes in particular contexts rather than on other barriers to market entry. I’ve written at length about this in my last blog post.
  • Investment and growth in the ecosystem of companies and other organisations aiming to up-skill, re-skill and motivate the widest possible range of people as we face the “disruption” in the job market due to AI anticipated by so many forecasters. (This is a key area of investigation for me right now as I explore my next paid job, so please get in touch if you are working on something interesting!)
  • A sophisticated ongoing debate around the deployment of AI in education, particularly amongst those developing new tools. Conference organisers, please have a think!
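
On the first two points – transparency about inputs and outputs, and users being able to feed back into the algorithm – here is a minimal, hypothetical sketch of what a “transparent, overridable” recommendation record might look like. None of the names refer to a real product; it is only meant to make the idea tangible.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Recommendation:
    learner_id: str
    suggested_activity: str
    inputs_used: list             # declared data sources, e.g. ["quiz_scores", "time_on_task"]
    rationale: str                # plain-language explanation of why this was suggested
    teacher_override: Optional[str] = None          # a human can replace the suggestion
    feedback: list = field(default_factory=list)    # comments routed back to the developer

    def apply_override(self, teacher_choice: str, reason: str) -> None:
        """Let the teacher supersede the algorithm and record why."""
        self.teacher_override = teacher_choice
        self.feedback.append(reason)

# Example: the software declares its inputs and rationale up front,
# and the teacher remains empowered to question and change the outcome.
rec = Recommendation(
    learner_id="anon-001",
    suggested_activity="fractions_practice_set_3",
    inputs_used=["quiz_scores", "time_on_task"],
    rationale="Recent scores on equivalent fractions were below the class median.",
)
rec.apply_override("fractions_group_work", "This learner responds better to collaborative tasks.")
```

The point is not the code itself but the contract it implies: the inputs are declared, the reasoning is explained in plain language, and the human decision sits above the algorithmic one.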

In short, I believe in a “critical optimism” approach. AI in education is critical – it is essential for our future. We also need to be critical – we must carefully examine and challenge the actions of ourselves and others, including major organisations. And we need to be optimistic as this exciting set of technologies reveals its possibilities and pitfalls.

Note: The Stanford report is here. Audrey Watters is here. However, nearly every sentence in this piece could have had a reference to evidence or research, so I’ve left all other references out to make for clean reading. Please reach out to me via Twitter @nkind88 or in the comments, and I’ll post links.

Image credit: yumikrum on Flickr (CC-BY 2.0)