Android and APIs
The goal of this lab is to prepare you to begin work on the Android set of MPs (MP6 and MP7) and your final project. You’ll prepare your development environment by installing Android Studio, build and run a simple Android app using the Android emulator, and familiarize yourself with the Microsoft Computer Vision Application Programming Interface (API).
1. Android Environment Preparation (45 Minutes)
In this section you’ll set up Android Studio to prepare your computer to build Android apps and complete the next several MPs.
1.1. Install Android Studio
Follow this tutorial to download and install Android Studio. Make sure to follow the instructions carefully, and ask the course staff for help if you need it.
1.2. Using the Android Emulator
Make sure that you complete this portion of the tutorial. To work on MP6 and MP7, you will need a working emulator, even if it runs a bit slowly. Again, work with the course staff to complete this portion of the lab.
Once you have the "Hello, world!" application running, try to make the following modifications to this simple app (a short code sketch after the list shows one way to approach the first two):
- Change the text that is shown to "Hello, CS 125!"
- Change the size of the text
- Change the position of the text
- Add some other element to the user interface
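The simplest way to make these changes is to edit the layout file (activity_main.xml) in Android Studio's layout editor. If you want to experiment further, here is a minimal sketch of making the first two changes from Java code instead. It assumes the greeting TextView has the id helloText (your generated project may use a different id, or none at all) and that the activity extends the support-library AppCompatActivity; newer Android Studio templates import androidx.appcompat.app.AppCompatActivity instead. Changing the text's position or adding new elements is usually easier to do directly in the layout XML.

```java
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.widget.TextView;

public class MainActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // Assumes the layout gives the greeting TextView the id "helloText";
        // check activity_main.xml for the actual id in your project.
        TextView hello = (TextView) findViewById(R.id.helloText);
        hello.setText("Hello, CS 125!"); // change the displayed text
        hello.setTextSize(32);           // change the text size (in sp units)
    }
}
```

Rebuild and re-run the app in the emulator after each change to see its effect.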
1.3. Using the Emulated Camera
As the final challenge for the Android portion of the lab, try to get the camera working on your emulated device. Consult the screencast above for instructions on how to do this. You can test whether your emulated camera is working by using the built-in Android Camera app. If possible, set up your emulated camera to use your computer’s built-in webcam, assuming it has one.
2. Introduction to APIs (45 Minutes)
API stands for application programming interface. The better you become at understanding and using existing APIs, the faster and more easily you will be able to build powerful apps and programs.
2.1. What is an API?
In the most general terms, an API is "a set of clearly defined methods of communication between various software components". For this lab and MP6 and MP7, we are particularly interested in the subset of remote web-based APIs. These APIs are:
- remote: they are accessed over the internet, and
- web-based: they are accessed using standard web protocols.
One way of thinking about APIs is that somewhere, in some data center full of computers, is a computer that will run certain functions for you if you ask it nicely. To use that functionality you don’t have to know where that computer is, or how the function works. You just have to learn how to get the data to the API in the format that it expects, and understand how to interpret the results.
An API is like a collection of functions, with each function providing different functionality. Using an API is very much like calling a function, except that to use a remote API we need to figure out how to get the data to the API and get the results back.
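To make that concrete, here is a minimal sketch of calling a remote web-based API from plain Java using java.net.HttpURLConnection. The endpoint URL below is purely hypothetical; the point is only to show the two halves of the job: sending a request to some computer on the internet, and reading back whatever it returns.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class SimpleApiCall {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint used purely for illustration; the real APIs
        // you will call in MP6 and MP7 have their own URLs and parameters.
        URL url = new URL("https://api.example.com/time");

        // "Getting the data to the API": open an HTTP connection and make a request.
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");

        // "Interpreting the results": read whatever the remote computer sends back.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```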
2.2. Microsoft Cognitive Services
For this lab and for MP6 we’re going to focus on using a portion of the Microsoft Cognitive Services API. It provides a variety of features focused on enabling interaction with users in intuitive ways—processing vision and speech data, and providing intelligent search, recommendation, and natural language services.
Why should you use this API? Well, imagine that you want to extract information from photos taken by users of your new app. You have two options:
- Spend several years mastering the sophisticated machine learning and computer vision algorithms required to implement your own solution…
- …or, use the Microsoft Cognitive Services Computer Vision API.
By using APIs you are truly able to stand on the shoulders of giants. Don’t waste your time solving problems that you aren’t interested in solving! Chances are that somebody else is waiting for you to solve the problem that you want to solve. Get on to that and focus on changing the world, not re-solving problems that others have already solved.
2.3. Using the Cognitive Services Computer Vision API
To get a sense of what the Cognitive Services Computer Vision API can do, experiment with some of the examples on this page. For each feature, upload your own images to get a sense of what kind of capabilities this API has.
In particular, MP6 will focus on the image analysis feature: the first one listed on this page. Go through the sample images and see if you can understand the results returned by the API:
- Are the results accurate?
- In cases where they are inaccurate, can you figure out why?
- What kind of information is reported by the API?
- What parts of it are you most surprised by and why?
2.4. Gaining API Access
Like many remote APIs, gaining programmatic access to the Microsoft Cognitive Services API in your app requires a key. Keys allow API providers to control who uses their services, and to charge API users whose usage exceeds various thresholds.
Happily, many remote APIs provide free access for usage that is more than sufficient to develop and test your own programs. And, as a student, you also have access to many free programs offered by companies to introduce you to their APIs and services. So you can try out a lot of things without paying a dime. Of course, once your app built using the Microsoft Cognitive API takes off and is being used by one million people, you’ll need to start shelling out some money to Microsoft. But let’s get there first.
So the first step to gaining access to the Cognitive Services API is to get an API key. You can follow these instructions or watch the screencast above.
Microsoft API keys are only valid for a specific region, so note the region when you generate your key. You’ll need both the key and the region in just a minute.
2.5. An Example Application
Finally, we’re going to use the API key that you obtained to start examining the output of calls to the Microsoft Computer Vision API.
Note that we are using IntelliJ again, not Android Studio, since this is a standalone Java application, not an Android app. We’ve set up a Lab 9 GitHub repository containing an IntelliJ project that’s correctly configured for Lab 9. Getting access to it is similar to how you imported MP0, except that you have to fork our repository first. If it’s not obvious how to do that, try following these instructions.
With this as your starting point, try to get to the point where you can make valid calls to the Microsoft Computer Vision API. Once you have accomplished that, try to adjust the URL to investigate different images.
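If you get stuck, the sketch below shows roughly what a call to the Computer Vision image analysis endpoint looks like in plain Java. The API version, the visualFeatures parameter, and the placeholder key, region, and image URL are all assumptions you will need to replace with your own values, and the lab project may structure the request differently. The overall shape is the part to understand: the region appears in the hostname, the key goes in the Ocp-Apim-Subscription-Key header, and the image to analyze is passed as a URL in the JSON request body.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class VisionApiSketch {
    // Both values come from the Azure portal; these are placeholders.
    private static final String SUBSCRIPTION_KEY = "YOUR_KEY_HERE";
    private static final String REGION = "westcentralus";

    public static void main(String[] args) throws Exception {
        // The "analyze" endpoint; the version and visualFeatures used by the
        // lab project may differ, so treat this URL as an assumption.
        URL endpoint = new URL("https://" + REGION
                + ".api.cognitive.microsoft.com/vision/v1.0/analyze"
                + "?visualFeatures=Description,Tags");

        HttpURLConnection connection = (HttpURLConnection) endpoint.openConnection();
        connection.setRequestMethod("POST");
        connection.setRequestProperty("Content-Type", "application/json");
        connection.setRequestProperty("Ocp-Apim-Subscription-Key", SUBSCRIPTION_KEY);
        connection.setDoOutput(true);

        // The image to analyze is passed as a URL in the JSON body.
        // Swap this placeholder URL to investigate different images.
        String body = "{\"url\": \"https://example.com/your-image.jpg\"}";
        try (OutputStream out = connection.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // Print the raw JSON response so you can inspect what the API reports.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```

Changing the "url" value in the request body is how you point the API at different images.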
3. Before You Leave
Don’t leave lab until:
- You have installed Android Studio
- You have been able to successfully run our Android "Hello, world!" application using the Android emulator
- You have reviewed our introduction to APIs
- You have obtained your key for Microsoft’s Cognitive Services API
- And so has everyone else in your lab section!
If you need more help, please come to office hours, or post on the forum.