Design for AI

In-depth interviews with experts, discussing how to design machine learning to be usable by everyone and helping define the space where machine learning intersects with UX. Covering UX/UI design, development advice, and PM guidance for all things AI.

http://www.designforai.com

Episode 3: How to use privacy to improve the UX of your AI apps



I talk about how to use privacy to improve the UX through federated learning.

  • Google's announcement of federated learning
  • Apple's announcement of privacy for their AI models using differential privacy

Music: The Pirate And The Dancer by Rolemusic

Transcripts

Hello and welcome to Design for AI
I'm Mark Bailey. Welcome to episode 3.

Today we will be talking about federated learning.
There is a good chance some of you are wondering what it means;
don't worry, it's still considered a pretty new topic in AI.
Even the terminology isn't pinned down: Apple calls its approach 'differential privacy'.
So I'll jump right in to explaining what it is and why it's important to UX.

The old way, or I guess I should say the current normal way,
to store data used for machine learning
is to round up all the data you think you're going to need, plus any data attached to it,
and upload it all to be stored on your servers.
This is the centralized model.
There is the saying going around that data is the new oil,
because the more data you can get your hands on
then the better the accuracy is for your model.
Which means you’re at the front of the line for the gold rush,
right?…

Well, not so fast
There are problems
Some people refer to data as the new plutonium, instead of the new oil
There is a high liability for personal data
Releasing an app over the internet is global.
But, laws and regulations change by country.
The new EU privacy laws like the GDPR conflict with the laws in authoritarian countries where they want you to share all your data.
In steps the idea of federated learning
As a quick side note, I am using Google's term, federated learning,
instead of Apple's term, differential privacy.
Differential privacy is a little broader, covering privacy for things outside of machine learning models,
so in the interest of keeping things as specific as possible I'll use the term federated learning.
I’ve included links for both Apple and Google’s announcements in the show notes.

Anyway, it is easiest to think of it in terms of using a cell phone,
because that is where all of this got its start for both companies
On-device storage is small, and there is too much data to upload over a slow network.
The phone downloads the current AI model.
Then it improves the model by learning from all the local data on your phone.
Your phone then summarizes the changes as a small update.
Only this small update is sent back instead of all the data.
For a non-phone example think of Tesla building their self driving cars.
Every car that Tesla currently makes records from 8 different cameras whenever the car is driving.
Those video feeds help train the model Tesla is building so the car can drive itself.
To date, Tesla has sold over 575,000 cars since 2014, when they added the cameras needed for self-driving.
Multiply 575,000 by 8, then multiply that by the number of miles all those cars drive.
It becomes obvious that is just too many video feeds to send over a wireless network,
much less to record and store on central servers somewhere.
More importantly, no one wants everywhere they have driven,
and every mistake they made to come back to haunt them.
Federated learning allows Tesla to push the model out to their cars.
Let the model be trained by data collected in the car,
then the training corrections are sent back to Tesla without needing to send hours upon hours of video.
Privacy and data bandwidth are preserved.
As a side note, Tesla does upload some video of a car's driving for things like accidents.
We'll talk about outliers and deciding which parts to keep private later in the episode.

So, federated learning allows for global results from local data.
Basically, train on the local device and send aggregated results back.
It lets you keep sensitive data on the device,
and if you can promise, and deliver, privacy to the user of an AI model
then you have taken care of one of the biggest fears users have for machine learning.
Think about it: wanting to keep data private is one of the biggest objections people have to using AI.
It is right up there with robots taking over the world.
If we can solve real fears now, we can start working on the science-fiction fears next.
This is why it is important to UX
All the benefits of privacy for your customers,
plus all the benefits for the company of well trained models.
Of course offering privacy to your users is a selling point but what are the trade-offs?

For the drawbacks I am not going to sugar coat it.
There might be some pushback from developers because it does add an extra layer of abstraction.
There is a good chance the developers have not created a model using federated learning,
so there will be learning involved.
Also, the models created from federated learning are different from models created from a central database, because the amount of data and the types of data collected are usually different.

As far as the benefits
You don’t have to worry about getting sued for accidentally leaking information you never gathered.
Really though, the biggest benefit is usually better, more accurate models, which may seem counterintuitive.
Since all the data stays local, you can collect more of it.
Also, since the model is trained locally, the model is better suited to the person using it, which is a huge UX benefit.
There are benefits even if your business plan keeps all of your machine learning models centralized,
instead of the models being on your customers computers or phones.
Because data is siloed instead of being in one central location,
it is a whole lot easier to comply with local regulations, like medical privacy rules.
You don't need to worry about the cost of transferring large amounts of data.
It is easier to build compatibility with legacy systems, since they can be compartmentalized.
And you can have joint benefits by working between companies,
with each company able to bring its strengths to the table without revealing its data.
Still since privacy is one of the main benefits, from the UX side of it,
it is important to let people using your app know about the privacy you are offering for peace of mind.
This is not easy since machine learning is already a difficult enough topic to convey to your customers.
For example, privacy protection is one of the main selling points Apple uses for the iPhone;
it is a big marketing point for them.
They are probably one of the biggest users of this concept, call it differential privacy or federated learning.
But I’m guessing that the majority of iPhone users have no clue
that most data for all the machine learning stays on their phone.
And, if Apple, the design focused company,
is having this much trouble conveying the message of one of their main selling points,
it’s obvious it is not an easy thing to accomplish.
The easiest way to convey to the user that you are keeping their privacy
is through transparency inside the app.
Show all the things using federated learning.
Break it down by which features use federated learning.
Show the user where the data goes, or really, where it doesn't go.
For example, one of the limiting factors of federated learning can be turned into one of its selling points.
Since federated learning needs to keep labels local,
it gives you a chance to explain why when you have people correct predictions,
like choosing who a picture on your phone is of,
or choosing which word auto-correct should have picked.
You can let the user know
that they are doing this to keep their own data private.
Now, if privacy is important to your business model,
if it is the thing you are showing as a benefit of using your app,
then it does need to be designed into the app from the beginning.
First, I won’t go into the math involved,
but merging multi-device information can still expose private data.
You need to make sure, when the app is designed, that the company can't see individual results,
only the aggregate.
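To give a feel for how a server can be limited to the aggregate, here is a toy Python sketch loosely inspired by secure-aggregation schemes: devices add pairwise random masks that cancel in the sum, so each individual upload looks like noise. The device names and numbers are invented, and real protocols are far more involved than this:

```python
# Toy illustration of "the company only sees the aggregate": each
# pair of devices shares a random mask that one adds and the other
# subtracts. Individual uploads look like noise, but the masks cancel
# in the server's sum. All names and numbers here are made up.
import random

updates = {"phone_a": 0.30, "phone_b": -0.10, "phone_c": 0.25}

devices = list(updates)
masked = dict(updates)
rng = random.Random(42)
for i in range(len(devices)):
    for j in range(i + 1, len(devices)):
        m = rng.uniform(-100.0, 100.0)   # pairwise shared mask
        masked[devices[i]] += m          # one side adds the mask
        masked[devices[j]] -= m          # the other side subtracts it

true_sum = sum(updates.values())    # what training actually needs
server_sum = sum(masked.values())   # what the server computes
# server_sum equals true_sum, yet each masked upload on its own
# reveals essentially nothing about that device's real update
```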
Next, the model can also, over time, possibly learn identifiable info.
When you design the app, make sure that the model limits the influence of individual devices.
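One common way to limit a single device's influence, sketched with made-up numbers: clip each update to a maximum norm before it is averaged in. (Differentially private training adds calibrated noise on top of clipping; that part is omitted here.)

```python
# Clip each device's update to a maximum L2 norm so no single device
# can drag the global model very far. Numbers are illustrative only.
import math

def clip_update(update, max_norm=1.0):
    """Scale an update down so its L2 norm is at most max_norm."""
    norm = math.sqrt(sum(u * u for u in update))
    if norm <= max_norm:
        return list(update)
    scale = max_norm / norm
    return [u * scale for u in update]

normal_update = [0.3, -0.4]      # norm 0.5: passes through unchanged
outlier_update = [30.0, -40.0]   # norm 50: scaled down to norm 1.0
clipped = [clip_update(u) for u in (normal_update, outlier_update)]
```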
Another important thing you will need to pay attention to is outliers
Normally you only want to pay attention to the difference from the average.
There is also a trade-off between the global model and a personalized model.
How much do you want to allow local data to alter the global model behavior?
That is a decision you need to make based on your use case.
The next big part of improving the UX is deciding how much to split your use cases into different personas.
Usually each persona gets its own model.
The best example I can think of is a language model:
train different models for different languages.
That helps to reduce the outlier information.
This is where accessibility fits in too.
Make sure not to forget it.
Since AI models try to average everything,
accessibility needs can be averaged out as outlier data.
Make sure to work any accessibility needs into specialized personas and models,
to reduce the noise for the model and get a better user experience for those with and without accessibility needs.
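The persona idea can be sketched as simple routing: pick the most specific model available for the user, falling back to a global one. The persona names and model registry below are entirely hypothetical, just to show the shape of it:

```python
# Route each user to a model trained for their segment (language,
# accessibility needs) instead of forcing one global average.
# Persona names and the registry here are hypothetical.

def pick_model(user_profile, models, default="global"):
    """Return the first persona the user lists that has its own
    model, falling back to the global model."""
    for persona in user_profile.get("personas", []):
        if persona in models:
            return persona
    return default

models = {"global": None, "es-keyboard": None, "switch-access": None}

user = {"personas": ["switch-access", "es-keyboard"]}
chosen = pick_model(user, models)   # first listed persona wins
```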
Outliers also influence how often the app should send back information.
Like I was talking about earlier, usually a model stores up enough information
before it sends it back, either to save on bandwidth costs or to ensure privacy.
If the app is getting a lot of outlier data, though,
you probably want to know about it as soon as possible,
to be able to adapt the model as needed and give a better user experience.
You will need the device to say when it has unusual data,
so the transfer can happen sooner.
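The "send sooner when the data looks unusual" idea could look something like this toy Python buffer. The z-score rule and all the thresholds are made-up illustrations, not anything specific from the episode:

```python
# A device-side buffer that normally uploads after batch_size values,
# but flags an early upload once enough recent values land far from
# the running average. All thresholds are made-up illustrations.

class UpdateBuffer:
    def __init__(self, batch_size=100, outlier_z=2.0, outlier_limit=2):
        self.batch_size = batch_size
        self.outlier_z = outlier_z        # how many std-devs is "unusual"
        self.outlier_limit = outlier_limit
        self.values = []
        self.outliers = 0

    def add(self, value):
        """Buffer a value; return True when it's time to upload."""
        if len(self.values) >= 3:         # need a small baseline first
            mean = sum(self.values) / len(self.values)
            var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
            std = var ** 0.5
            if std > 0 and abs(value - mean) > self.outlier_z * std:
                self.outliers += 1
        self.values.append(value)
        full = len(self.values) >= self.batch_size
        unusual = self.outliers >= self.outlier_limit
        return full or unusual

buf = UpdateBuffer(batch_size=100, outlier_z=2.0, outlier_limit=2)
normal = [buf.add(v) for v in (1.0, 1.1, 0.9, 1.0, 1.05)]  # all False
# a burst of wildly different values triggers an early upload
triggered = buf.add(50.0) or buf.add(55.0)                 # True
```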
Well thank you for listening
and I hope you found this episode interesting
I would love to hear feedback on this topic and
which other topics you would like to hear about.
To leave feedback, since this is a podcast,
use the voice recorder app on your phone,
  and make sure to give your name
then email it to podcast@designforai.com

If you would like to know how to help,
Well your first lesson in ML is to learn how to help train your podcast agent,
by just clicking subscribe or writing a positive review on whatever platform you use to listen to this podcast.

Thank you again
and remember, with how powerful AI is,
let's design it to be usable by everyone.


 September 18, 2019  13m