Meta: early indicator of Zuckerberg’s impact on education

Last week the Chan Zuckerberg Initiative (CZI) bought a company called Meta. This small acquisition is potentially seismic in its implications for the worlds of science and education.

CZI is the philanthropic vehicle for Priscilla Chan and her husband Mark Zuckerberg (I put the couple in that order because they do). Since the Initiative launched with some fanfare, there have been only a few announcements – ambitious, striking, but mostly short on detail. Chan Zuckerberg Science was announced with the intention of “helping cure, prevent or manage all diseases in our children’s lifetime.” The highly regarded Jim Shelton was appointed head of Chan Zuckerberg Education. They made their first lead investment in edtech and employability company Andela, alongside a group of other well-known investors in the space. So far, so expected.

The Meta acquisition is different. In the company’s own words, Meta is “a tool that helps researchers understand what is happening globally in science and shows them where science is headed.” Facilitated by the move towards Open Access in scientific publishing, the venture-backed company was selling Artificial Intelligence-driven insights about the state and potential trajectory of research to publishers, pharma companies, and others.

Now, Meta’s toolset is going to be totally free, and the company has a clear mission to open up its technology to humans and machines, and to collaborate with the research community to make itself better. The company’s for-profit drive has been up-ended, replaced by a mission of impact alone. The venture investors will take their financial exit and leave (presumably having made profits they feel happy with).

This is unprecedented as far as I’m aware. I’ve never known a philanthropic organisation to acquire a venture-backed startup and mould it to its own ends. Invest, make grants, lobby – all familiar stuff from the activities of the Bill and Melinda Gates Foundation and others. Acquire – no.

The effects of this are, in my view, many. There are relatively parochial issues about what happens to the world of Scientific, Technical and Medical publishing, which I spent several years analysing with my team when at Holtzbrinck. Market leader Elsevier’s apparent strategy of building competitive advantage through amassing proprietary data and the resulting algorithms looks challenged. Clarivate Analytics (previously part of Thomson Reuters) will be worried about the long term viability of its Web of Science product.

More interesting are the broader consequences and issues. I should say first that I feel that CZI is doing something really interesting and worthwhile here, and genuinely breaking new ground. But there are wrinkles.

  1. CZI just changed a commercial market. The value chain and power relationships amongst players in scientific publishing are altered substantially – not least as this is probably just the beginning of what CZI might do, given their financial firepower. Investors and shareholders will be worried and may hold off making spending decisions. Who knows what will be changed next?
  2. There is a risk that CZI actually reduce choice and innovation in the available algorithms and tools which analyse the corpus of scientific knowledge. It’s very difficult to compete with free. I suspect they have thought of this and may well want to put money into a range of approaches, but it’s worth pointing up.
  3. Venture capitalists may now have a new exit route to add to trade sale (i.e. sale to another company) or IPO: sell to CZI if your idea is clever enough and aligns with their theses. I wonder how soon I will see this in a fundraising document.
  4. There’s a clear issue of democratic accountability here. Without malice or deliberate intention on anyone’s part, we have ended up in a situation where two people can spend huge amounts of money according to their definition of “good”, and consequently affect a major component of our society. I suspect CZI are thoughtful and well aware of this, and consulted with the scientific community before making their acquisition. Nevertheless, this is another slightly scary illustration of how polarised our world is becoming.

So what about education? All of these issues are directly transferable. I would now be very surprised if Jim Shelton and his team don’t make some major moves which re-balance power in education ecosystems, particularly in the USA but likely more broadly. Investors and existing players need to take note of whatever signals we get from San Francisco.

Yet in my view CZI need to ensure they don’t transfer their model wholesale. Science has a clear, globally accepted value system based on “standing on the shoulders of giants” – rigorous process, peer review, publication. Teaching and learning are different. As I (and many others) have written before, establishing “what works” in education is a highly subjective, value-laden exercise. CZI will need to show how they are fostering innovation and impact in education across a diverse range of cultures and contexts – or unapologetically and publicly adopt a clear set of values, and argue passionately for them. They need to do all this whilst avoiding being cast as latter-day cultural imperialists and facing a backlash. This is a tall order in a “post-truth” world where Zuckerberg has already struggled with issues of editorial responsibility in his day job at Facebook. The team have awe-inspiring ambition, power, and potential, which they have only just started to deploy. Things just got interesting.

http://meta.com/#letter

Education and Artificial Intelligence – critical optimism

We shouldn’t believe the hype or succumb to dystopian panic – it is vital to be simultaneously positive and reflective

Artificial Intelligence (AI) seems to be at peak media frenzy right now, with its effects on education frequently in the mix. My own reading in the last few days has included the first published output from Stanford University’s One Hundred Year Study on Artificial Intelligence, news of the American technology giants working together to address societal concerns around AI, and a review of Yuval Noah Harari’s new book Homo Deus, which follows on from his highly influential Sapiens to examine the future of mankind in the light of new technologies.

The attention is not new, even if the intensity and pitch seem higher. Education has long been touted as a sector where advanced machines can make a real difference – and as Audrey Watters regularly points out, this history pre-dates digital computers. “Personalisation at scale” is the holy grail of education technology, seemingly promising better learning with fewer resources. Education can be portrayed as an engineering problem, with the notion of “learning science” becoming prevalent and indeed key to some major corporations’ strategies and marketing literature.

The market is full of marketing, of course. Personally, I’ve become convinced of the promise of better analytics – providing aggregated and digestibly presented data to inform the decisions of students, teachers, parents and administrators. After all, the markbook (aka gradebook in the USA) has long been a key tool of the teacher in its paper-and-pencil form. However, I am instinctively sceptical the moment analytics become “predictive”, or the moment software starts to recommend paths of action to people – or even enforce them on people – who are not empowered to understand, question or circumvent them. I am particularly dubious if a commercial vendor does not wish to explain the theory behind a software algorithm, and/or claims wide, generic application, and/or does not wish for their claims of learning improvement to be subjected to rigorous independent evaluation.

My knee-jerk reaction is clearly questionable. There will never be any clear boundaries between analytics, recommendations, and artificial intelligence – the moment you ask a question of data and choose how to present its results, you have prioritised that line of enquiry over many others. Some vendors have persuaded me – often via an enjoyable debate – that their algorithms and user experiences work effectively, but only for specific aspects of learning (for example, in delivering and reinforcing factual knowledge, but not in teaching critical thinking). There is thoughtful work being done on student retention, and interesting research taking place on evaluating adaptive technologies.

My concerns remain, however, based on the following:

  • Context is crucial in effective learning – and there are so many different contexts. There are plenty of commentators who talk as if a tool or technology is context-agnostic – a “magic pill”. Yet I have had murmurs of recognition pretty much every time I discuss what I call the “carrot and piece of string” issue: a truly great teacher can probably deliver a great lesson on any subject with a carrot and a piece of string, but a poor teacher will struggle even when equipped with every fancy tool available. This is why thoughtful implementations of technology in education focus on the contexts in which effective learning happens with the provided tool, and almost invariably include training (perhaps for both teachers and learners), evaluation and reflection as part of the rollout.
  • Computers can never capture all of the data which go into learning. In the past I have caricatured this as “the breakfast problem”: research in the UK some years ago indicated (perhaps unsurprisingly) that kids who didn’t have breakfast learnt less well during the day. I haven’t yet come across a system which asks kids about their diet, but even if one did capture this relatively simple data point, it would be impossible to gather the totality of the experience the child has brought to school that day and therefore to make the absolutely best recommendations for what and how (s)he should be learning. A teacher can’t capture all the data either, but they have many more things to work with, most importantly human empathy.
  • Misleading claims of scientific amorality. All algorithms are written by humans; and education is a world particularly freighted with values. Vladimir Putin may view – and teach – the recent history of Syria rather differently to me. The notion of “learning science” can be used to imply fixed, unarguable goals as much as it can be used to imply a critical, evidence-based approach to acknowledging the complexities of understanding and delivering “good” education.
  • Crass comparisons between education and other sectors, sometimes by people who should know better. Education is not retail or entertainment. To take one example, learning recommendations are sometimes compared to Amazon’s “you bought this, so you might want these things other people like you bought” functionality. Retail is binary, unitary and effectively irreversible. You bought something or you didn’t. (Yes, you can return an item, but you still bought it). Learning is none of these – you can partially understand a concept, you may need to learn other things before you can master it, and you can forget it too (just ask me to try and explain calculus).
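
To make that last contrast concrete, here is a deliberately toy Python sketch – every class, threshold and constant below is my own invention, not any vendor’s model. A purchase is a single boolean; a learning state is partial, depends on prerequisites, and decays over time, loosely in the spirit of classic forgetting curves.

```python
import math
from dataclasses import dataclass, field

# Toy contrast between a retail record and a learning state.
# Every name, threshold and constant here is invented for illustration.

@dataclass
class Purchase:
    item: str
    bought: bool        # retail is effectively binary: bought or not

@dataclass
class SkillState:
    skill: str
    mastery: float = 0.0                               # partial understanding, 0.0-1.0
    prerequisites: list = field(default_factory=list)  # things to master first
    days_since_practice: int = 0

    def ready_to_learn(self, known: dict) -> bool:
        # You may need to learn other things before you can master this.
        return all(known.get(p, 0.0) > 0.7 for p in self.prerequisites)

    def current_recall(self, half_life_days: float = 30.0) -> float:
        # And you can forget: a crude exponential decay, loosely in the
        # spirit of Ebbinghaus-style forgetting curves.
        decay = math.exp(-self.days_since_practice * math.log(2) / half_life_days)
        return self.mastery * decay

calculus = SkillState("calculus", mastery=0.9, prerequisites=["algebra"],
                      days_since_practice=7300)       # roughly twenty years on...
print(f"Recall today: {calculus.current_recall():.4f}")   # effectively zero
```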

Discussions of the role of AI in learning have been given fresh energy, complexity and relevance by the dizzying advance of technologies which can be described with the general term “AI”, and by the extreme, often dystopian, predictions for the future which have been inspired by this rise (cue Martin Ford and Harari). Deep learning, NLP, expression recognition et al. may, it seems, lead to a future where there are many fewer jobs, irrevocably deep societal divides, or even computers which are more intelligent than humans and present us with an existential threat.

Written (and it seems extensively debated) by a panel of genuine experts, the Stanford report is a useful counterweight to the prophets of doom. According to the report,

“Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind. No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future. Instead, increasingly useful applications of AI, with potentially profound positive impacts on our society and economy are likely to emerge between now and 2030, the period this report considers.”

Frankenstein, mercifully, seems some way off. But the very next sentence in the text shows the report’s nuanced approach, another impact on education, and how we all now have to step up:

“At the same time, many of these developments will spur disruptions in how human labor is augmented or replaced by AI, creating new challenges for the economy and society more broadly.”

Note that there is a deliberate choice not to say that there will be fewer jobs – there is a clear acknowledgement of the uncertainty around this matter elsewhere – but it is clear that jobs will be different. Here, and implicitly threaded through much of the document, is a highlighting of choice – the decisions that we have to make, collectively and individually, as these technologies become more prevalent and more powerful.

What does this mean for those of us that work in creating new educational products, services and companies, for those of us that research, teach and/or manage in educational contexts, for learners, or for the interested citizen? I’d like to see the following emerging:

  • Transparency from education software developers about the inputs and outputs used and produced by their code. A comparison here would be the approach taken by Android and iOS to contact information, location data, or Facebook posting – the user has to authorise apps to access external data and functionality, and therefore engages with what the software is doing behind the scenes.
  • Digital education products and services which actively include and react to input from their users as their algorithms produce recommendations – empowering teachers, learners, administrators, parents, and more to influence the real-world results (and potentially even improve the software); see the sketch after this list. The Stanford report is worth quoting again here:

“Design strategies that enhance the ability of humans to understand AI systems and decisions (such as explicitly explaining those decisions), and to participate in their use, may help build trust and prevent drastic failures. Likewise, developers should help manage people’s expectations, which will affect their happiness and satisfaction with AI applications. Frustration in carrying out functions promised by a system diminishes people’s trust and reduces their willingness to use the system in the future.”

  • A marketplace for personalised learning algorithms based on an open operating system for education, so that software can compete on the basis of its learning outcomes in particular contexts rather than other barriers to market entry. I’ve written at length about this in my last blog post.
  • Investment and growth in the ecosystem of companies and other organisations aiming to up-skill, re-skill and motivate the widest possible range of people as we face the “disruption” in the job market due to AI anticipated by so many forecasters. (This is a key area of investigation for me right now as I explore my next paid job, so please get in touch if you are working on something interesting!)
  • A sophisticated ongoing debate around the deployment of AI in education, particularly amongst those developing new tools. Conference organisers, please have a think!
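
For what it’s worth, here is a minimal Python sketch of how the first two points above might fit together – the API, names and the “weakest topic” heuristic are entirely my own invention, not any real product. The recommendation declares the data it used, explains itself, and records the teacher’s override (which is itself useful feedback).

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical API: a recommendation that declares its inputs, explains
# itself, and can be overridden by a teacher. Nothing here corresponds
# to a real product.

@dataclass
class Recommendation:
    student: str
    next_activity: str
    inputs_used: list              # transparency: which data fed the algorithm
    rationale: str                 # an explicit explanation of the decision
    overridden_by: Optional[str] = None

def recommend(student: str, gradebook: dict) -> Recommendation:
    # Trivial stand-in for a real algorithm: suggest revising the weakest topic.
    weakest = min(gradebook, key=gradebook.get)
    return Recommendation(
        student=student,
        next_activity=f"revision: {weakest}",
        inputs_used=["gradebook"],
        rationale=f"Lowest recent score ({gradebook[weakest]}) was in {weakest}.",
    )

def teacher_override(rec: Recommendation, teacher: str,
                     activity: str, reason: str) -> Recommendation:
    # The human stays in charge; the override and its reason are themselves
    # data that could be fed back to improve the software.
    rec.next_activity, rec.rationale, rec.overridden_by = activity, reason, teacher
    return rec

rec = recommend("Sam", {"fractions": 42, "decimals": 78})
print(rec.rationale)               # "Lowest recent score (42) was in fractions."
rec = teacher_override(rec, "Ms Jones", "group work: fractions",
                       "Sam learns this topic better collaboratively.")
```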

In short, I believe in a “critical optimism” approach. AI in education is critical – it is essential for our future. We also need to be critical – we must carefully examine and challenge the actions of ourselves and others, including major organisations. And we need to be optimistic as this exciting set of technologies reveals its possibilities and pitfalls.

Note: The Stanford report is here. Audrey Watters is here. However, nearly every sentence in this piece could have had a reference to evidence or research, so I’ve left all other references out to make for clean reading. Please reach out to me via Twitter @nkind88 or in the comments, and I’ll post links.

Image credit: yumikrum on Flickr (CC-BY 2.0)

Open Educational Resources (OER) and the challenge of choice

Today I read in EdSurge, an educational technology newsletter and site, that “Menlo Park, Calif.-based nonprofit Open Up Resources launched today with $10 million in foundation funding”. The company is riding the Open Educational Resources wave, which seems to be particularly high at the moment. Other buzz has included the release of a recent Babson report highlighting that “most higher education faculty are unaware of Open Educational Resources”, the US government’s #GoOpen campaign, Amazon Inspire, and more.

Broadly speaking, Open Educational Resources (usually shortened to OER) are educational materials which are free at the point of use, usually only in digital form. An often-used, more restricted definition only includes resources whose components (e.g. text, media) can all be freely remixed and reused by anyone to suit individual learning contexts.

I can’t count the number of times I have sat in conferences or bars or meeting rooms and debated whether or not OER will be (or should be) the death of the publisher. Generally I feel that this is both a dull conversation (publishers can evolve) and missing the more important issues here (see below).

Consciously or unconsciously, OER development is a decision about costs and where they land. There is always a cost in developing educational resources – even a simple worksheet uploaded by a teacher onto a sharing website has a cost in terms of time which could have been deployed in another way (for example on planning a new lesson, or childcare, or going to the pub). Foundations in the USA such as the William and Flora Hewlett Foundation deploy substantial funding towards the development of both materials and OER organisations. Some of these organisations (e.g. Open Up, from a scan of their website) seek sustainable business models through the sale of add-ons such as teacher training or printed versions.

There used to be a common knee-jerk reaction from publishers that OER could never be of decent quality. Ignoring the evidence of great materials out there, this is a priori not true – OER development simply deploys many of the skills and tools of quality publishing (pedagogy, design, selection, aggregation, proofreading) with another business model. Some will be excellent, some good, some bad.

Sustainability of OER materials is a more serious matter, but equally a red herring. Many will have a very long half-life (given my degree in English Literature, I think of commentaries on Shakespeare from the eighteenth century which are still read today – digitise them, and they don’t need updating). But time-sensitive materials such as those covering US politics, or the state of Britain in Europe (sad face), need almost daily review, and can quickly become irrelevant or positively misleading if left alone. However, there are plenty of ways to ensure that this management takes place.

The real – and complex, no-easy-answers – issue around OER is that of choice and innovation. A market for textbooks served by for-profit publishers and software developers is a government policy decision, not an immutable monolith – and in many countries, such markets don’t exist. Whilst in the UK funding is given to schools to spend on the resources they feel they need for their particular context, in some Indian states one textbook per subject is developed by the government and provided to schools as a fait accompli. Sometimes there is a hybrid model with a restricted list of approved publications, as with the Brazilian government’s PNLD programme or adoption states in the USA.

Governments are making their own judgements here about what’s appropriate. On the one hand, developing a single textbook has the merits of economies of production scale and of control (this last aspect being particularly attractive in authoritarian regimes). On the other, a completely free commercial market with funding devolved to schools means that more money is likely to be spent on developing materials than the government would have spent centrally (because some producers will spend on unsuccessful materials and lose money), markets can often foster innovation through competition, and schools have much greater choice. However, funding goes to for-profits, which some find uncomfortable.

OER are equally a double-edged sword. If they are funded by a foundation or uploaded gratis by teachers, that’s effectively new money coming into the system, funding fresh, hopefully innovative and likely more flexible materials – at first sight, not much to dislike here. But there is a potentially restrictive effect on choice. For-profit organisations obviously won’t compete with free, and schools will usually choose to redirect restricted budgets away from educational resources if quality gratis alternatives are available. In an extreme scenario, foundations could become the main funders of the educational resources used in schools. No educational materials are free of political or moral judgements. So, if this situation looks likely, it will become essential to ensure that there is a really good choice of OER, so that diverse viewpoints remain available. Any suggestions?

Side note: there’s a point about remixing here which is often missed. It would be quite possible for governments to mandate that schools only buy materials which can be remixed, and publishers would have to follow suit. The challenge is that many educational resources are better with copyrighted content which simply can’t be licensed to allow re-mixing (for example, clips from Hollywood films, or images of major artworks).

Image by https://opensource.com


An open operating system for education, the threat of Big Tech, and how progress might happen

In my twenty years of working in education technology around the world, I have become convinced of the compelling case for an “open operating system for education”. If digital tools of all sorts were able seamlessly to exchange data without human intervention, but according to clearly defined rules governing privacy and permissions, the potential would be hugely exciting. Public, not-for-profit and commercial players in the system could compete and collaborate on the basis of how well they improve teaching and learning, not their access to proprietary data. There would be room for many more experiments and innovations from large and small organisations, as an open source algorithm written in a back room could be compared to $100m interventions from mega-corporations. We would have a much better chance of knowing what worked, why, and in what context. Governments, administrators, teachers, learners and their parents could better control and analyse the data they create. The ecosystem could be fairer, more transparent and vibrant.

We’re certainly not at that point today. Such a system would rely on a set of comprehensive, widely adopted, open standards for how digital educational tools should interoperate. Vitally, these standards would have to be agnostic to commercial model or local cultural norms (for example, around privacy). They would need to support as many choices as can be imagined – not because all choices are good choices in my or your opinion, but because decisions in the value-laden world of education are usually context-specific, culturally debatable, and often develop over time. Texas would make different choices to Turin or Tianjin, and these all need to be accommodated.
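
As a thought experiment, here is a minimal sketch (in Python, with an invented schema – real standards such as xAPI or Caliper are far richer) of what “shared format, local rules” might mean in practice: every tool emits the same event shape, while each jurisdiction plugs in its own permission policy.

```python
from dataclasses import dataclass

# Invented schema: a minimal, standard-shaped learning event plus a
# pluggable permission policy. This only illustrates the architectural
# idea that the format is shared while privacy rules remain a local choice.

@dataclass(frozen=True)
class LearningEvent:
    learner_id: str    # pseudonymous identifier, ideally owned by the learner
    tool: str          # which application produced the event
    verb: str          # e.g. "completed", "attempted"
    obj: str           # e.g. a lesson or quiz identifier
    score: float

# The standard does not hard-code one privacy regime: each jurisdiction
# (or school) supplies its own policy over the same event format.
POLICIES = {
    "texas":   lambda event, requester: requester in ("teacher", "district"),
    "turin":   lambda event, requester: requester == "teacher",   # tighter sharing
    "tianjin": lambda event, requester: requester in ("teacher", "ministry"),
}

def can_read(jurisdiction: str, event: LearningEvent, requester: str) -> bool:
    return POLICIES[jurisdiction](event, requester)

event = LearningEvent("learner-7f3a", "SomeQuizApp", "completed", "fractions-quiz-2", 0.8)
print(can_read("turin", event, "district"))   # False: Turin chose a stricter rule
```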

There are a lot of standards out in the world, but they aren’t complete and they often focus on very specific American contexts. This isn’t the place to examine the alphabet soup of LTI, LRMI, SCORM, QTI, Caliper, OneRoster, Ed-Fi and more, but suffice it to say that not all of the people developing all the standards talk to each other or have the resources to consider the world outside the USA. Additionally, the standards aren’t universally supported by tools and are sometimes resisted by vested interests. Indeed, companies such as Clever are working to help sort out the spaghetti of proprietary systems by offering a (proprietary) interoperability layer. Plenty of data is being created by some fascinating organisations – from AltSchool to JISC to Zaya Learning – but its full potential is untapped.
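
To make the “interoperability layer” idea concrete, here is a hedged sketch of the adapter pattern such layers rely on – the vendors, payloads and field names are all invented. Each proprietary format is translated into one common shape, so downstream tools only ever see the standard form.

```python
# Sketch of an interoperability layer of the kind described above:
# adapters translate each proprietary roster format into one common
# shape. All payloads and field names below are invented.

COMMON_FIELDS = {"student_id", "full_name", "class_id"}

def from_vendor_a(record: dict) -> dict:
    # Hypothetical vendor A nests the name and uses camelCase keys.
    return {"student_id": record["studentId"],
            "full_name": f"{record['name']['first']} {record['name']['last']}",
            "class_id": record["sectionId"]}

def from_vendor_b(record: dict) -> dict:
    # Hypothetical vendor B flattens everything under its own labels.
    return {"student_id": record["sid"],
            "full_name": record["displayName"],
            "class_id": record["homeroom"]}

ADAPTERS = {"vendor_a": from_vendor_a, "vendor_b": from_vendor_b}

def normalise(source: str, record: dict) -> dict:
    row = ADAPTERS[source](record)
    assert set(row) == COMMON_FIELDS   # every source ends up in the same shape
    return row

print(normalise("vendor_b", {"sid": "42", "displayName": "Ada L.", "homeroom": "7B"}))
```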

For me, one issue now dominates whether or not we will make progress towards this open operating system. It is the role of the huge technology companies which are increasingly reaching into every part of our lives – Amazon, Apple, Google/Alphabet, Facebook, Microsoft, Alibaba, Tencent and Baidu.

The tech titans are on a mission in education. Google Classroom. Recently, Microsoft Classroom too. And Apple’s additions to iOS 9.3 so that it works better in schools. And Amazon’s recent release of Inspire. And Alibaba’s investment in TutorGroup. And a raft of other developments over the last few years as education technology has once again become a focus for investors, corporations and policymakers across the world.

Regrettably, these huge companies don’t seem to have considered some of the long-term consequences of their moves (or possibly don’t care about them). There is a clear and present risk that their actions will stifle innovation, create unintended controversy, and (most importantly) reduce the impact that we can all make as stakeholders in the world of learning.

Some background. Taking just the US players, Alphabet/Google, Amazon, Apple, and Microsoft are each pursuing a “Trojan Horse” approach in the small-for-them global education market: they’re loss-leading in line with their wider strategies.

  • Google continues to improve the functionality of its free Classroom tool for schools, universities, teachers and students. Classroom is part of the Apps for Education suite and extends it so that users can not only send and receive email and work on documents together, but also assign work to classes, collect students’ grades, and discuss homework. (In edtech jargon, it’s a free LMS.) Chromebooks – which run a Google operating system – are now the bestselling devices to US schools, likely because they are cheap and easy to manage. They require a Google account. All of this pulls in data and users: Google’s raison d’être.
  • For many years, Microsoft has made Office free for schools and universities. They hope that this will ensure that people use their productivity tools out of habit and familiarity for the rest of their lives. Newly-launched Microsoft Classroom – very similar to Google’s offering, if a few years behind the curve – is part of free Office 365 for Education.
  • Apple is fundamentally a hardware-driven business. It likes to sell iThings and Macs and for people to become used to them (and so keep buying more hardware, apps and content). So, selling beautiful devices into education is good for Apple on all fronts – and if this requires some tweaks to iOS, so be it. Apple are even prepared quietly to purchase an edtech startup to further their strategy in education.
  • Amazon wants to be the distributor of as much as possible. Digital content especially – films, TV programmes, ebooks, audiobooks. Education resources have thus far been out of their reach as publishers and others have their own direct-to-schools sales staff. So they have bought a company which creates mathematics content, created a site for teachers to discover and share resources, and offered “one stop shop” solutions to cities and (allegedly) entire nations.

All, you may say, fair enough: isn’t it brilliant that teachers and learners across the globe get free or reduced price tools from these big players which let them do their jobs more efficiently and cheaply? And surely part of the companies’ motivations is about helping out the world of education?

Yes and Yes. But.

But – data. Google, Microsoft, Apple and Amazon lock up the data created in their own proprietary systems. They have absolute control over who gets access to it, for what purpose. Some access to the rich seams of information created by teachers, learners, administrators, content, applications (and more) is possible, but right now none of these solutions abide by any widely adopted open standards for data exchange. Furthermore, it’s rarely entirely clear who owns the data and what it can be used for, by whom.

Right now, this isn’t too much of a problem. Big tech’s offerings are relatively immature and little used in comparison to their competitors such as Blackboard, Desire2Learn’s Brightspace, Instructure Canvas, Moodle, Edmodo and many more. The competing offerings support some standards with varying degrees of usability, and in many cases do the best they can.

However, the scope and functionality of the Google, Apple and Microsoft solutions are inexorably growing. These tools are free at the point of use and are backed by relatively trusted brands with immense marketing clout. This means that such software is easily – and sometimes unthinkingly – adopted in the educational context, where budgets never go far enough. There are cases of entire countries’ education systems deploying Office 365 for Education. Furthermore, it seems that many adopters are sleepwalking right now – at a recent educational conference I asked three informed and intelligent teachers on a panel whether or not they were concerned that Google was in control of their gradebook data. Their responses implied: “Google Classroom is free – what’s not to like?”

In short, as adoption of these gratis tools unavoidably accelerates, educational data and access to it may become increasingly controlled by a small, powerful group of private companies based in the USA. Google Classroom et al. are of particular concern not because I think the big technology companies will necessarily do anything bad – I’m no conspiracy theorist – but because we need to ensure that at least the majority of data created by education is public and personal rather than commercial and proprietary, and the track record so far of the big tech companies in opening up data is – shall we say – mixed.

Public, open standards are gnarly, painful to create and maintain, glamourless, and invisible if they work well (nobody thinks about TCP/IP as they “like” something on Facebook). But they are sometimes essential for the health and development of the system which they underpin. I for one firmly believe that educational data is one of those cases.

So what to do? It would be nice to think that Amazon, Microsoft and their peers could see the public benefit of collaboration. Perhaps uniquely, they have the capabilities to knit together the current patchwork quilt of standards into a workable whole and support them seamlessly (they have just about agreed on HTML). It’s debatable whether or not open standards would harm their business models – whilst “lock-in” would clearly be reduced, users might be more prepared to use all of their offerings sequentially or even simultaneously: a teacher using Google Classroom on an Apple Mac to access a quiz made in PowerPoint bought through Amazon Inspire. Arguably, such openness would accelerate the adoption of their free tools even more.

Sadly, there is currently little evidence that these issues are even on the agenda of the technology giants. As with other markets such as music, at some point we may find ourselves presented with a fait accompli which seems beyond the control of governments. Control may already have slipped away.

In my view we need to work to avoid this. In the absence of Big Tech stepping up to the plate, plenty of bottom-up and top-down initiatives could help to ensure educational data is properly available and managed. Governments and groups of schools and universities should only accept contracts where there are clear assertions of data ownership (ideally by the learners themselves) and access (for both learners and institutions). All market players could accelerate the existing work which is going on by actively engaging with standards initiatives and encouraging them to work together. As teachers, learners, parents and citizens we all need to hold vendors and institutions to account. If we don’t, education will likely be the poorer.

For references and links, contact me in the comments or on Twitter @nkind88