Last year, when I wrote “Two ways to use Azure Machine Learning in education” and “Making machine learning in education easier for every day users”, I imagined a future where the power of the Project Oxford services could help educational institutions with scenarios like face recognition (Who’s in my classroom today? Is anybody smiling in this picture?), speech processing (Who’s speaking in the lecture now?), visual analysis (Does this image contain inappropriate material?) and language understanding (When you said run, did you mean run a program, or run down the street?). I didn’t focus on specific scenarios, like how it could help with accessibility in education, or more generally how machine learning and artificial intelligence can help improve education.
In the last year, our focus has mainly been on helping developers use these services to build new applications and web services – so you may not have seen rapid change in the way you use this kind of technology to achieve your goals. And we’re continuing to release new services and new ways of interacting with them. Last night we announced our Cognitive Services – a set of tools that helps developers build apps and services that enable natural interaction for users, and add intelligence to whatever they are building.
And I know that all sounds very geeky…artificial intelligence, machine learning, cognitive services…so let me put it differently and show you what it will mean for you in the future!
How about if, instead of replacing people with technology, we got better at helping you get things done? Not replacing you, but helping you be better at what you are doing, and to do more of it. In the case of teaching, can we augment the creativity, insight and emotional intelligence that teachers excel at with what machines excel at – fast computation, pattern recognition across large datasets, and a vast capacity for data storage and retrieval?
Let me show you two examples of what we are achieving in other areas, because they sparked great ideas about what could soon be possible in education.
The Seeing AI app
The first is the Seeing AI app, a research project that uses these intelligence services to help people who are blind or visually impaired better understand who and what is around them.
Watching this gave me all kinds of interesting ideas for how we can improve accessibility in education – what about you?
The second example is holoportation, a Microsoft Research project that uses a new style of 3D capture technology, together with the Microsoft HoloLens, to create an immersive holographic experience in 3D, within a physical space.
Watching this also fired off a buzz of new ideas – everything from distance learning to remote mentoring, to re-creating historical moments and putting your students right in the middle of them.
Find out more
If you’re a developer and you want to explore how machine learning works in general, and how our intelligence APIs work in particular, you can visit our Cognitive Services website. There you can find out all about our vision, speech, language, knowledge and search APIs.
And if you’re slightly less technical (like me), you can play with some of the services on the web, like CaptionBot. It simply uses the services to analyse a picture and suggest a caption. You can upload your own picture, or point it to a picture on the web. Although it’s not perfect (yet!), it shows the potential for what we might be able to do in the future – and there are a number of different scenarios that spring to mind in education, like automatically checking, captioning (and quarantining?) images uploaded by users into your school’s systems.
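For the developers reading along, here is a minimal sketch of what a caption request like CaptionBot’s might look like against the Computer Vision describe endpoint. The endpoint URL, region and `YOUR_KEY` placeholder are illustrative assumptions – your own Azure subscription provides the real values.

```python
import json

# Illustrative endpoint; the actual URL depends on your Azure region and
# the API version available to your subscription (an assumption, not a spec).
VISION_ENDPOINT = "https://westus.api.cognitive.microsoft.com/vision/v1.0/describe"


def build_caption_request(image_url, subscription_key):
    """Assemble the three pieces of a 'describe this image' call:
    the endpoint URL, the auth/content headers, and the JSON body
    naming the image to analyse."""
    headers = {
        "Ocp-Apim-Subscription-Key": subscription_key,
        "Content-Type": "application/json",
    }
    body = json.dumps({"url": image_url})
    return VISION_ENDPOINT, headers, body


# Sending the request could then look like this (commented out so the
# sketch runs without a network connection or a real key):
#
# import urllib.request
# url, headers, body = build_caption_request(
#     "https://example.com/classroom.jpg", "YOUR_KEY")
# req = urllib.request.Request(url, data=body.encode("utf-8"), headers=headers)
# with urllib.request.urlopen(req) as resp:
#     result = json.load(resp)  # suggested captions sit in the response JSON
```

The point is how little plumbing sits between an uploaded school image and a suggested caption: one authenticated POST with the image URL in the body.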
We welcome your feedback
Let us know if you enjoyed this article and we'll share similar content