The PolicyViz Podcast

Jon Schwabish, an economist and a specialist in data visualization and presentation skills, talks with guests about data visualization, presentation skills, open data, and technology.

https://policyviz.com/podcast/







Episode #205: Steve Franconeri and Jen Christiansen at VisComm


In this week’s episode of the podcast, I’m playing the recording from the opening moderated panel discussion between myself, Jen Christiansen, and Steve Franconeri at the 2021 VisComm workshop at the IEEE VIS conference. We (the workshop organizers) asked Jen and Steve to join the workshop to talk about two sides of the dataviz field: practitioners and researchers. What does each side know that it wishes the other knew? What should practitioners know about dataviz research, and how can researchers incorporate practitioners’ work into their research? We explore this and a lot more in this week’s podcast.

Oh, one more thing! The PolicyViz Podcast and Better Data Visualizations book have both been nominated for the Data Literacy Awards! Please consider voting for both by heading over to the Data Literacy website.

Jen Christiansen is senior graphics editor at Scientific American, where she art directs and produces illustrated explanatory diagrams and data visualizations. She began her publishing career in New York City at Scientific American in 1996, moved to Washington, D.C. to join the staff of National Geographic (first as an assistant art director/researcher hybrid and then as a designer), spent four years as a freelance science communicator and returned to Scientific American in 2007. Jen writes and presents on topics ranging from visualizing uncertainty, to her quest to learn more about the pulsar chart on Joy Division’s Unknown Pleasures album cover. She holds a graduate certificate in science communication from the University of California, Santa Cruz, and a B.A. in geology and studio art from Smith College.

Steven Franconeri is a Professor of Psychology at Northwestern, and Director of the Northwestern Cognitive Science Program. His research is on visual thinking, visual communication, and the psychology of data visualization. He directs the Visual Thinking Laboratory, where a team of researchers explore the power and limits of your visual system, and how better design and pedagogy can help students and scientists understand and use visual representations across paper, screens, and their imagination.

Episode Notes

Jen | Twitter | Website | Scientific American | Practical Resources Google Sheet
Steve | Twitter | Website

Jen Presentation: Senior graphics designer discusses importance of scientific infographics

Paper: The Connected Scatterplot for Presenting Paired Time Series

Paper: A Model of the Perceptual and Conceptual Processes in Graph Comprehension

Paper: Increasing the Transparency of Research Papers with Explorable Multiverse Analyses

Paper: Reading the pandemic data

Paper: Arcs, Angles, or Areas: Individual Data Encodings in Pie and Donut Charts

Paper: Beyond Memorability: Visualization Recognition and Recall 

Journal: Psychological Science in the Public Interest (article from Franconeri et al. forthcoming)

Scientific American and Moritz Stefaner: Where the Wild Bees Are: Documenting a Loss of Native Bee Species between the 1800s and 2010s

Book: Making Sense of Field Research

VisComm Website

Evan Peck: Data is Personal. What We Learned from 42 Interviews in Rural America.

Information+ Conference

Lab in the Wild

Test My Brain

Related Episodes

Episode #53: Jen Christiansen

Episode #93: Robert Kosara

Episode #184: IEEEVIS Recap

Support the Show

This show is completely listener-supported. There are no ads on the show notes page or in the audio. If you would like to financially support the show, please check out my Patreon page, where for just a few bucks a month, you can get a sneak peek at upcoming guests, grab stickers, or even a podcast mug. Patrons also have the opportunity to submit their own questions to guests. You can also send a one-time donation through PayPal. Your support helps me cover audio editing services, transcription services, and more. You can also support the show by sharing it with others and reviewing it on iTunes or your favorite podcast provider.

Transcript

Welcome back to the PolicyViz podcast. I am your host, Jon Schwabish, with a sort of different episode coming your way this week. Now, if you didn’t know, a few weeks ago was the IEEE VIS conference. The IEEE conference is primarily an academic conference for those working in the data visualization field. There are a few workshops prior to the main conference that try to focus on some of the practitioner part of the data visualization field. So along with a few other folks, namely Alvitta Ottley, Barbara Millet, and Adriana Arcia from Columbia University, we pulled together the VisComm workshop, which is really about visualization for communication, and trying to build out sort of this community where we can get this cross pollination between the academic side of the field and the practitioner side of the field. So having said all of that, the first part of that workshop, which occurred on the Sunday before the conference, was a moderated discussion that I hosted between Steve Franconeri, who is a professor at Northwestern University, and Jen Christiansen, who is the senior graphics editor at Scientific American. And the conversation was so interesting, talking about all the different ways that practitioners can learn from academics doing the research, and the academic researchers could learn from practitioners in the field, that I thought I would repost the entire discussion here as an episode of the podcast. So in case you weren’t able to join the conference and watch it live or one of the recordings on the IEEE VIS YouTube channel, I thought I would just post this as a podcast episode, so you can listen to it on any of your favorite podcast providers, from Stitcher to iTunes to Google Play to Spotify; or, if you want to go back and watch it over at my YouTube channel, you can check it out there. And so, I’m just going to replay basically that entire conversation. It’s about an hour, so it’s a little bit longer than the usual episode of this particular podcast. 
But there’s a lot going on, there’s a lot of great conversation that came out of that, a lot of great resources and references, all of which I have included in the episode notes to this particular podcast episode. So I hope you enjoy this conversation between myself, Steve Franconeri, and Jen Christiansen, and once again, thanks for listening to the PolicyViz podcast. Here’s that moderated discussion from VisComm 2021. 

Jon Schwabish: Good afternoon. Morning everybody. I hope you’re well. Very excited for our first session in the VisComm workshop. We’re going to have a discussion. We’ll see how many fights we can get started. We have two fantastic guests joining us today. We have Steve Franconeri who is a professor at Northwestern University; and we have Jen Christiansen who is the senior graphics editor at Scientific American. And so, the idea for our discussion today is to see or take these perspectives on data and data visualization from two parts of the field. So Steve, sort of, representing with power and finesse, the academic side of things, and then Jen from the practitioner side, the public communication side. And so, we’re going to start very simply with sort of our core question, and then, what I’m going to do is I’m going to ask Jen and Steve to sort of give their short bio so you have a sense of who they are, and I’m sure, many, if not all of you know these two folks. So I’m going to have them sort of answer the core question, and then we’re going to jump in, and I’m just going to give them, feed them a bunch of questions, and hopefully we end up with a good conversation, and maybe we’ll end up with some fights and see who can come out on top, the academics or the practitioner. So we’ll see. 

Okay, so our core first question for today is: what should the other party in the data visualization field – researcher or practitioner – know about visualizing data and information? What is each side missing? What don’t we know that we should know from each perspective? So what I’d like to do is just start with Jen. Maybe Jen, you can just sort of introduce yourself and then share one or two thoughts about what researchers need to know about the practitioner, about the broader communication side of data visualization, and then we’ll turn it over to Steve. 

Jen Christiansen: Okay. Well, first of all, no fights, just better understanding. 

JS: I like to press the buttons and get things going a little bit. 

JC: I know. I know. That’s how you get the viewers. 

JS: Yeah, right.

JC: And the listeners. So my background is actually in scientific illustration, although for most of my career, I’ve fluctuated between being a visual journalist and a science communicator, and that’s been at Scientific American and National Geographic, and as a freelancer. As Jon mentioned, I’m currently a graphics editor at Scientific American magazine. So, in our print magazine and website, we cover research ideas and knowledge in science, health, technology, the environment, and society. There are two of us on the graphics team: my colleague, Amanda Montañez, focuses on the fast turnaround news items, and I generally focus on longer form feature stories. Sometimes we create the visualizations ourselves, but we also hire an art [inaudible 00:05:31] freelance designers. So let’s see. What would I as a practitioner like researchers to know about visualizing data and information? Well, it’s likely that most researchers are already aware of this on some level, so I’m probably oversimplifying here. But sometimes I get the impression that researchers make assumptions about the end goal of a visualization that don’t necessarily align with the practitioner’s goal, especially if that graphic is stripped out of its original context. And so, sometimes, I think that critiques centered on whether or not a graphic is successful can be misleading. So much hinges on context; even graphics that appear in the same publication, for arguably the same audience, can have wildly different goals. One graphic may aim to present data as cleanly and clearly and as efficiently as possible; a graphic in another story might just be aiming to prompt self-reflection; another in the same publication might be more playful and serve as a form of entertainment. 

JS: So Jen, before I give Steve the mic, can I ask, what do you all use as your metric of success for visualization? So when we talk about the end goals, are there metrics that you’re using to determine whether a visualization you’ve produced has been successful? 

JC: Yeah, so at this point, it’s mostly about clicks and how long people stay on websites, and are they scrolling through a full graphic, or are they bailing on part of it; but mostly, is the article that the graphic is embedded in doing well and resonating with people on social media and on the website. But I feel as though we’ve really lost sight of how that translates to print. We used to do focus groups for that sort of thing, and I haven’t done a focus group with people in a room in years. So I feel like we’re getting a sense of whether people are engaged with the digital content, but we don’t know a whole lot more than that. 

JS: Wow. Really interesting. All right, Steve, I’m going to hand it over to you. So quick bio, and then to that core question: what should the practitioners know from the perspective of research, or, what should practitioners know about visualizing data? 

Steve Franconeri: Sure, I’m Steve Franconeri, I’m a professor at Northwestern in the psych department. I also have courtesy appointments and hang out a lot in design and computer science and the business school. My academic history is that I studied visual neuroscience – vision in the real world – in grad school, and worked on more ivory tower style studies of what’s the capacity of your visual memory and how many objects can you track in sort of simplified displays, and then felt like my displays were getting a little too petri dish, and about 10 or 12 years ago, started doing a lot more translational research inspired by the needs of practitioners. So we work a lot in chemistry education – how do we get students in organic chemistry to represent and rotate that complicated molecule in 3D – and then I think the majority of our work in the lab in the last 10 years has been on data visualization – so how can we leverage the power and avoid the limits of the human visual system when we’re trying to do visual analytics or trying to communicate data to other people’s brains. And just like Jen, Jon, this is not going to be as pugilistic a prompt as you’re expecting. So Jen, my question for you is: help, we need you. I think my career story is one of finding the joys of translationally inspired research and taking the questions of practitioners and using that to guide where we go, and avoiding studying petri dishes. And so, my request is, help us help you – the sorts of issues that you run into in the real world should be inspiring our research more. I actually love that initial prompt that you gave about paying attention to context and goals; I think that’s a fantastic direction, and one we should weight more heavily in the academic world. 

JS: So that’s a really great way to segue into sort of the first part of this, I think, Steve. So Jen, I want to ask, and this is really from earlier conversations I’ve had with both of you for today’s session – so one thing that Steve really wants to know, which he sort of just alluded to, is what should researchers be working on. I think one of the big challenges in data visualization we’ve seen in the last few years is uncertainty, which we saw especially during the presidential election; there was a lot of rethinking – maybe, is that the right term? – rethinking how places were doing their estimations, their projections, and also visualizing them; FiveThirtyEight is a great example. But what are the main things that you think researchers should be working on? And Steve, you should feel free to interject and fill in the things that maybe we don’t know exist, which will come later, as we break down some of these silos and get us all closer together. 

JC: Yeah, so this first one might be a petri dish option. But there’s sort of, I think, an easier way to kind of get into the idea of what a practitioner like myself could use, and it’s similar in theme to the uncertainty visualization conversations that have been happening. But I want to know if people are working on log scales at all, and figuring out a better way to show logarithmic data – and maybe that’s just because I work at a scientifically focused magazine. But do people even know how to read log scale graphics, scientists and non-scientists alike? And are there other ways that we can show that kind of data? One thing we use that for is star charts, where it’s luminosity and size. So I’m just not sure if people understand what they’re looking at. But as far as things that are more related to that context – and I know there are so many variables in figuring out how to research this – I’d love to know if and how graphics add value to a full article, because we’re rarely showing just a graphic by itself; it’s usually couched in some sort of text. So do people spend more time with an article if there’s a visualization included? We have some of those metrics with website analytics. But do they remember what they’ve read more vividly? Does it impact their impression of what they’ve read? And does the style of the graphic within that larger article impact any of those variables? So that’s kind of the core of what a lot of my questions end up revolving around. 

SF: Yeah, I can take a shot at that. So for log scales, yes, there’s some work on this. I put a link in the Discord to a blog post that Jeff Zacks at WashU and I wrote last year, because, like the pandemic data, log scales suddenly became really important. If you show the trajectory of COVID infections as a linearly scaled graph, people extrapolate linearly and don’t realize that if it’s going like this now, it’s going to go like this later. And, of course, translating that Y axis to a log scale, you can now do linear extrapolation, but no one really understands the log scale, unless you’re a scientist who’s been trained to use these things and you’re used to them. So that article has some suggestions, and one of them is to give really concrete examples on the Y axis. If you’re going to put 100, 1,000, 10,000, then give people a sense of what that looks like – this is the number of people on the block, this is the number of people at a public swimming pool, this is your town – to link it more concretely to real world experiences. There are some other ideas in there, but I think that’s probably the most productive one. And there’s more research happening on that one actively. Similar to the other COVID-inspired research, I think the pandemic got people really interested in log scales, because otherwise a lot of the research was from 20 years ago. 
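Steve’s suggestion – swapping raw log-scale tick values for concrete, real-world anchors – can be sketched as a tiny labeling helper. The anchor quantities and wording below are illustrative assumptions, not taken from the blog post:

```python
# Sketch of the concrete-anchor idea for log-scale axis ticks.
# The specific anchors here are invented for illustration.
CONCRETE_ANCHORS = {
    100: "100 (everyone on your block)",
    1_000: "1,000 (a crowded public pool)",
    10_000: "10,000 (a small town)",
    100_000: "100,000 (a packed stadium)",
}

def label_log_ticks(values):
    """Return display labels for log-scale tick values, falling back
    to a plain comma-formatted number when no anchor is defined."""
    return [CONCRETE_ANCHORS.get(v, f"{v:,}") for v in values]

print(label_log_ticks([100, 1_000, 10_000, 1_000_000]))
```

In a real chart, labels like these would be attached to the tick positions of a logarithmic axis, so readers extrapolating along the line also get a felt sense of each order of magnitude.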

For your second question, on how including visuals affects the way that people process information, a lot of that research comes from the education literature. When you’re putting diagrams into textbooks, one of the surprising things you find is that if you put the diagram over on the side – the text is here, and the diagram is here – many students will not look at the diagram, which we as researchers and practitioners find insane, because that’s the first thing that we’re going to look at, because we know that we can powerfully extract information from it. And it turns out that learning to read the diagram is a skill, and then it’s extra hard when the text is separate from the diagram and you have to look back and forth and figure out which parts of the diagram match with which parts of the text. So the prescription from the education literature is to interleave them – actually take that text and pop it into the diagram – which the work that you art direct at Scientific American absolutely does. You would rarely have all the text here and then the diagram; you’re putting in text boxes with arrows, stepping people through how to read the diagram and guiding them over time, which is exactly what that literature has discovered is so important. 

JS: So to that point, Jen, a lot of the work that you all do at Scientific American is taking this pretty dense scientific research, distilling it down, improving or making even better graphics, and then trying to integrate the story with the graph as well. So can you talk a little bit about that work and that process, and how you think about taking what is maybe the more research literature where those things are kind of separate and bringing those two things together? 

JC: Yeah, so as you implied, we do often start with data that’s been pre-analyzed and published in a peer reviewed paper. So we’re not necessarily doing this with investigative work; we’re saying, okay, the scientists came to this conclusion, and here is their supporting data. Sometimes our feature articles are written by the scientists who actually did that work, and so we have a direct line to the content experts; we can get them on a call, talk them through the graphics that appeared in their paper, and really get to the heart of what is the critical bit in here that should really be highlighted. In other cases, we’re working with journalist authors, so we’re taking a bunch of different preexisting pieces and kind of putting them together – not in the same chart, obviously, but to create a story. The first thing I’m doing is stripping out jargon, and that also means visual jargon; the symbols and chart forms that carry highly specific information within a specific context can be really efficient for communicating with others who are fluent in that language, but they’re like a brick wall to outsiders. So a lot of my job revolves around either knocking down those brick walls and kind of reinventing the visualization – is there a different form that we can use that gets rid of a lot of that visual jargon? – or adding footholds into that wall. Those footholds can be annotations, aesthetic refinements, changes in color palettes and symbols that help establish a visual hierarchy, or just really clear instructions for how to read the chart. But we are sort of approaching a visualization as if we’re walking somebody through it one step at a time, guiding their attention with color or annotations. 

JS: So Steve, to that point of having graphs sort of integrated in the page itself, is there a reason why the visualization research community hasn’t done that? I’m not familiar with the education research, but is there a reason why visualization-specific researchers have not been exploring it? I mean, in my experience, it’s like, here’s a graph, we’re exploring why or how people read this graph, but it’s just a graph. And you did some really interesting work on the connected scatterplot, but it’s not embedded within a larger piece. So is there a reason why the research community in DataViz hasn’t been exploring these broader merged pieces? 

SF: Yeah, I’m assuming inertia [inaudible 00:18:09] we’re used to it. That’s just the format: we have a visualization, and then we have a caption under it, and you’re constantly looking up and down, and some text over on the page and you’re looking up and down. And that’s just the way that we typically do it, and we keep doing it. But in some of my papers, I like to have a single figure with words in it that just explains the whole paper. That actually is the first thing that I make, and I realize that I think about my own paper in a different way once I do that, because I can see everything holistically. Things are changing a bit. I know that when I read one of Matt Kay’s papers, there are little graphs that show the distribution behind the numbers being quoted, right there in line with the text; and with data comics, etc., there are initiatives to start to interleave language and visuals more effectively, and I’m excited to see those developments. 

JS: That’s great. 

JC: Yeah, and if I might jump in, I’m thrilled that there’s progress on that front, because it is really hard, as a practitioner, to be reading some of the literature, and not seeing the guidelines being enacted by the people who are saying this is what you need to be doing. It’s sort of a, well, show me, you know. And so, it’s hard to take some of the guidance seriously, if it’s not being actively used. That said, I also understand that journals often have very strict publishing rules and protocols in place. So I’m really looking forward to when that really kind of takes off and we can actually start to see a lot of the advice in action. 

JS: So I want to flip this initial question over to Steve. So we started with, for Jen, what should researchers be working on? So for Steve, what are two or three things that you wish all DataViz practitioners knew or understood about cognitive neuroscience, cognitive science, like, what should we have in mind, like, what are the top three things that we should have in mind when we’re making a graph or a dashboard or a longer piece, a longer article?

SF: I think that folks like Jen already do this, but I’d say for all practitioners, two things I’d call out would be the types of storytelling techniques that practitioner guides and books talk about – I think those are really important – and then critique would be the second thing. So for storytelling: everybody knows that your visual system is very powerful, 40% of your brain, etc. But that visual system is really good at locking into single perspectives. If you have multiple patterns you could see in something on the screen, you tend to lock into one. So imagine that duck-rabbit ambiguous figure, that illusion you’re all familiar with. There are these great experiments from the 90s where, if you show the duck-rabbit, and someone looks at it and says it’s a duck, and then you take it away and don’t let them see the actual image again, and you tell them, think about it in your own head, could you see anything else there – they say no, because the human brain is really good at locking into a single organization, a single perspective. But then you put it back on the screen, and you say, is there anything else, and now people can reassess. So human brains are really good at doing that. And when you have a data visualization or a diagram, there’s a series of perspectives that you have to take on it. You have to look at this difference, this trend, this statistic, etc., and it takes time to savor that. One of my favorite quotes on this topic is from one of my heroes in the graph comprehension literature, Priti Shah, and it’s: reading a graph is not like looking at a picture, it’s like reading a paragraph. And this is something that I’m trying to repeat in every public forum that I’m in, because I really believe that that’s true, and I think she nailed it 15 years ago, when she was writing papers on this topic. And it takes time to sort through those perspectives – not for everything. 
So Jen has a nice framework of representative illustrations, more pictorial graphics – it’s a dinosaur, it’s a virus, you get it within a couple hundred milliseconds – but as soon as you have a diagram or a data visualization that’s more than trivially simple, there’s a series of perspectives that you have to take, and it takes time. And there are many paths that you can take to do that, a series of interpretations. That paragraph has many ways that you can write those sentences, and it can go in the wrong direction or the right direction. So the storytelling techniques – highlighting, annotating – will guide your readers to seeing the right patterns, and people don’t always do that. And the reason they don’t do that is they have a bit of a [inaudible 00:22:45] curse of expertise. It’s a duck-rabbit, you should see a duck, you see a duck, and you know what, people are really bad at realizing that other people see the rabbit. Human brains are terrible at taking the perspective of other people. We use our own experience to simulate what other people are seeing, and that leads us to assume that we’re communicating a lot more than we are, and people see different things. So that’s where critique, that second aspect, comes in. Once you’re an expert, you see the right pattern in the visualization, and maybe you think that your storytelling is good enough – someone like me, maybe I’m decent at it; someone like Jen does it, and it’s probably fine as it is, right, because there’s just so much experience there. But in general, getting critique is critical. Put the visualization, the diagram, in front of a group of other people, and ask them what they see, and whether they get it, and collect hard data on whether they see that complex paragraph in the same way that you do. So I’d say storytelling and critique would be my two. 

JC: Yeah, if I could, if you don’t mind, I’d jump in there. I love hearing you say that, because I think, at least in the journalism world – or, at least, I should probably only speak for myself here – we’re very good at asking our colleagues for feedback as part of the process. But I personally need to get a lot better at trying to figure out how to ask my intended audience for that critique and that feedback, because my colleagues are coming at it from different points of view, but we also share a lot of what we’re looking for in an article – like, we’ve all read the draft of that manuscript already; we can’t get that out of our heads. Whereas trying to figure out how to get the cold reader to critique something is, I think, something I’ve dismissed as, well, that’s too hard, because we’re working on embargo or this or that, but it seems like a pretty critical thing that I need to figure out how to do. 

SF: And the time aspect is hard – making the time to do it. I’m a little bit of a practitioner as well, in that I teach sort of science communication; all my classes are called something like presenting your research or communicating your research, or the undergrad one is sometimes show and tell. In all of these cases, we talk about critique, and invariably this question comes up of, well, I’ll kind of show other people that know this topic, because I’m making it the day before – and I run into this as well – and the advice that comes up in the room typically is: pre-book a meeting with people that are outside of that group two weeks before, to kind of commit yourself to it, and that’s what I wind up having to do myself. I book a lab meeting for a research talk a week before the talk, and I’ll feel bad if I cancel it. And that gives me a chance to test out the material. And then we’ll try to make sure that we have some undergraduate students who are unfamiliar with the work in the room, so that we are not only getting advice from folks who know the area really well. But it’s really tough to plan ahead to do that. 

JS: And Steve, you mentioned, in this process of critique, collecting hard data. So for folks – Jen already has mentioned page views and time on page – but when you’re doing that little, I’d call it, what sounds more like an informal focus group, what are the hard data elements that you’re trying to collect? 

SF: Oh yeah, I had a curse of expertise on that one – I said it, and that didn’t make any sense. I mean, don’t just imagine that people understand things, and don’t even take their word when they nod, because they’re being too nice or they don’t want to look like they didn’t get it. And so, if you have a culture where it doesn’t feel mean to ask, you can ask people – can you summarize what came out of that presentation, can you tell me what pattern you’re supposed to be seeing in this graph, is it totally clear to you – and actually get real data from their responses, the way that we do in an experiment, instead of just an assessment from their perspective of whether they got it, because humans are also really bad at that; people think they understand things until they need to explain or state them. The same thing happens to me when it comes time to teach or write the paper – I realize that I didn’t actually understand the topic as well as I thought I did. So hard data means try to get them to regurgitate information, and use that instead of their own assessment. 

JS: So there’s a question in one of the various chat windows that’s relevant to this part, so before we switch gears, I want to get to that. And for folks who are watching, feel free to add your questions through any of the various ways that you can send in questions; we’ll talk for another 15-20 minutes or so, and then have plenty of time for Q&A. But Laura put in a question about what you think about designing visualizations that are targeted and meaningful for both experts and a general audience. I’ll go to Jen first on this, because I suspect you have this challenge all the time: you have sort of a general reader of Scientific American, and then scientists who are reading the magazine or the publication as well. 

JC: Yeah, and often our author is a scientist, so it's also a bit of trying to convince them that this way, although it's not the way they would have done it, is still valid for them and their colleagues as well as our broader readership. This is one of my favorite challenges – kind of the makeover challenge. It's like, okay, here's this dataset, how can I make it over in a way that surprises and delights the specialist, and helps them see things in a slightly different way – not necessarily seeing a pattern they hadn't seen before, but maybe – and also seeing something in an aesthetically pleasing way, or something that triggers a different emotion or connection than just the analytic part of their brain. So one of my favorite challenges is how we can create something new from a dataset or from an existing chart that delights and engages and provides information to a broad range of audiences. One of the ways I do that is by bringing in freelance designers that think outside the box, and have a reputation for doing that. I'm thinking of a wild bee piece that [inaudible 00:28:59] did for us years ago that still makes me smile, because it presented the data in a way that the scientists hadn't seen before, and in a way that really nodded to the topic behind the story. It wasn't just a chart that felt anonymous and disconnected from the content it was showing. It became kind of like that bee – the hexagon pattern was embedded in it, and not in a gimmicky way, in a way that really worked. So it's a challenge of trying to figure out how to honor the data, but provide another connection, another way for people to connect to it. 

SF: Yeah, just to follow up on that, I've been amazed at the kinds of work that happen – my hero for this is Jen's work at Scientific American, or the work that data journalists do – where I would think that to show a dataset, you'd have to reduce it down and only show the bar graph version; you couldn't show that complex network analysis, you can't show parallel coordinates. But these folks can do it. They step people through explainers of how to read these charts, and they still leverage the power that these more advanced moves have. So where I would typically think I would need to distill it, given the thoughtful design that goes into these pieces, they're great at teaching people how to read them, and therefore maintaining that extra power that those more advanced visualization types carry. 

JC: It's been said by many people – I know Nigel Holmes and Alberto Cairo have said this – but it's all about clarifying, not simplifying. And so, using that mantra and trying to figure out, yeah, how can we clarify this really complex thing in a way that will surprise and delight the scientists as well, because they're expecting that we're going to have to strip it down to its very basics, as you alluded to. 

JS: Yeah, so you're both doing a really nice job of helping with the segues from one section to another. So I want to turn to this idea we've already been talking about – helping people understand how to read different types of data visualizations. So Steve, you stated earlier that there are these basic graphs that we all know and understand almost instinctually. It's sort of a two-part question: first, what do you put in that little box of graphs that we can basically assume everybody knows how to read, and second, is there a research base for that, other than just kind of knowing that everybody knows how to read a bar chart? 

SF: I think I could – so the list is going to be the things that you knew up through eighth grade: bar charts, line charts, pie charts, stacked bars, etc. The things that are not on that list are going to be [inaudible 00:32:07] and connected scatter plots, and then all the way up to fancier things like parallel coordinates. I know that there are folks doing research on what lay audiences tend to understand – I don't know if I can quote the folks, but [inaudible 00:32:25] does some great work along these lines. But I don't know if there's a paper that curates, if you are a typical member of the public, will you understand visualization X – and then think about different populations – that'd be a great paper. But I don't know that anyone's curated something at that high of a level. 

JS: So then, Jen, to you: how do you think about this from the publication, from the practitioner's side – this balance between graphs that we expect people to understand quickly and easily versus the more complex ones, and that bee graphic is a great example, that are going to take more time, that people are going to have to investigate and engage with more than just saying, oh, that's a line chart, this line's going up, that line's going down? 

JC: Well, in all cases, we have a chart title – not even just a chart title, we have a box title, introductory text, then the visualization. So we're already setting people up with a, here's what you're looking for, here's what's significant. So even if they are struggling to read something, they've already been primed, much in the way of sort of saying, here's the duck, look for the duck, and then hopefully they can see the richer context within the full graphic. Ultimately, we're trying to prime them for success. In terms of helping people with more bespoke solutions – because sometimes we're running visualization solutions that don't even necessarily have a name, so you can't look up how to read them – we just really conversationally say here's how to read the graphic. We're not doing a key or a legend that just shows the colors and the patterns; we literally write, every dot represents a star, the color of that dot represents this. Just as if you were walking somebody through it, like you were telling the friend next to you: okay, here's a dot, that's what it means, and the color means this, and the distance means that. Really, in a conversational way, in plain language, we just set people up for success that way. 

SF: And I think that's a great rule; great designers have an intuition for this. But typically, something more complicated goes up on a PowerPoint slide, and it's just, here you go, and the author just starts talking over it because of that curse of expertise. And I think those same techniques can be adapted by anyone who communicates data with an unfamiliar representation: step through it one thing at a time. [inaudible 00:34:59] let's just show the X axis – gray out everything else on your slide and actually just show that. And now let's just understand the Y axis. Now, here's one point; let's understand how it works on both axes, and you know what, the size varies too, and here's how to think about that. Now people are ready for more complexity; now you can throw on more points, and you can add in those other dimensions. I love that technique of stepping things in, one element at a time – let's talk about one variable, or one way of visually representing variables, at a time – and it's something that the curse of expertise typically prevents for presenters and authors. 

JS: So Steve, on the research side, when researchers are testing these sorts of things – whether people understand how to read a scatterplot or a connected scatterplot, or any of these other graphs that we've been talking about – do you feel like bringing people into the lab creates a sort of false context, in that that's not really how people are interacting with these visualizations out in the world? I'm asking this question because you mentioned Evan Peck, who has a somewhat famous paper where he went into farmers' markets in central Pennsylvania and actually sat down with people and asked them specifically. So I'm curious about the balance between bringing people into our lab, who are undergraduates or graduate students at the university, or using Mechanical Turk, versus going out into the community and actually sitting down with people. 

SF: That's a great question. We typically divide this into two categories of study in the lab. One is going to be, what can the visual system do, if you know how to use it in the right way? So if you want to judge a correlation, and you're using a scatterplot, you're going to be great at it, if you know what to do; if you do it with something else – two bar graphs or parallel coordinates or something – you're going to be worse at it, even if you really, really know what you're doing in both cases. So there you're studying the power and limits of the visual system and what it can compute, and in those cases, I wouldn't think there's going to be a lot of variability among people, if you take the time to teach them how to read it – this is what human brains are capable of. So that's one end of what we study. But the other end is exactly where you're getting at this understanding question: do people know how to turn the knobs on their visual system, and know what patterns are relevant, and know how to move through a sequence of views of the data to read that graph over time? There are going to be huge individual differences in that. And in those cases, people tend, for convenience, to study crowd workers, Mechanical Turkers, and that's why that work by Evan Peck was so exciting – he broke out of that model and actually worked with real folks, which is really important if you want to communicate science to the rest of the world. 

JS: I'll turn that back to Jen, because Jen, you mentioned earlier that you all used to do focus groups, maybe a little bit more when we were in a print-first world. If you had – and I know you don't – unlimited time and unlimited budget, how would you think about doing this sort of research or focus groups now, especially now that we're in this digital-first world, and also COVID, where everybody's isolated a little bit in their rooms in front of their computers? How would you think about doing this on a practical level from Scientific American? 

JC: Well, maybe this isn't a practical concept, because of the time and the money involved, but I would love to have that feedback earlier in the process. Like, I'm reading a manuscript, and I think I know what needs to be visualized to help somebody, to give them more context – what point in this article would benefit from a graphic or a data visualization or whatnot. So even starting from there: is my instinct correct on that front, or is somebody who's a cold reader saying, well, actually, this other point is something that I want to see before I believe it and take your word for it? So first, understanding if I'm illustrating the correct things, answering the questions that people have. Then sometimes we're exploring different ways of solving that problem, and at that stage, is there a sense of, oh yeah, this answers my question more clearly than that approach would? So there are a few steps along the way. But mostly, if we were just waiting till the end and doing a focus group piece on it, I'd ask questions like, first of all, did this graphic add to your experience here, do you feel like you have a greater understanding because of it? And also, as you mentioned earlier, Steve, have them actually summarize what they got from it, because we don't want a yes or no answer – we want to see if the goals aligned with what is actually being interpreted at the other end. 

SF: I'll just briefly mention that I put in the Discord two links to folks that are looking to build platforms where you can find more diverse audiences in terms of their graphical literacy levels: there's Katharina Reinecke's LabintheWild, and then on the psych side there's testmybrain.org – that one's meant to pull people in with the tease of getting some stats about your brain. But really, it's a way of engaging people with that sort of research who don't typically do it. So those are at least efforts to try to do these kinds of things digitally. 

JC: I'm also reading Sheila [inaudible 00:40:40]'s work on field research – I think it's a field guide, or, I can't remember the title off the top of my head, which is horrible. But she has some interesting ideas there that are helping me figure out how I can wedge this into my workflow. 

JS: So on that note about practitioners' limited time and limited budget, as it were – I guess it's really a question to start with Steve – for those sorts of practitioners, which I think is most of us, do you think it's more important for them to learn about broad cognitive science concepts, or should they watch for the latest DataViz research and best practices? Where should the practitioner with a limited amount of time spend their effort in the DataViz research community or field? 

SF: I wouldn't think it would be an effective use of time to go and try to read all of the proceedings of VIS, given the amount of time that's available. One great thing about this field is that there are a lot of really smart people that write books and blogs and make YouTube videos, etc. – [inaudible 00:41:55] Jon, you're one of them. And actually, the person who's collected the best set of these, I think, is Jen, so I'm putting her link [inaudible 00:42:04] what why when how into the Discord. So check that out. Jen has collected a great set of resources on science communication that are focused on data visualization, and I looked over her list and I don't have much to add to it – I think it's great. There are a lot of great blogs in there – let me see if I have a – here's a YouTube link to a talk of hers where she reviews a lot of this too. So just listen to Jen is my advice. 

JS: I think we should get that made as a T-Shirt, and just have that. 

SF: And I will say that IEEE VIS does a nice job of having a guidelines section at the end of a paper, right? I don't know if it's a formal requirement, but it is an expectation that you will not only say what your results are, but concretely say what this means for the real world. That is not something that happens in a psychology paper – in fact, if you put that in a pure psychology paper, it's not going to look good, because it makes you seem less theoretical, in some sense; [inaudible 00:43:05] that's applied research, which I think is absolutely silly. And there's also a practitioner statement that we need to write, a short paragraph that sums up for the practitioner what they should take from the research. Now, I still don't think that reading all the practitioner statements for the entire conference is going to be a good use of your time. I would go about it in a targeted way, and I would start with the sorts of guides that are curated by folks that actually serve as that bridge between the academic literature and practitioners. 

JS: I'm just going to let folks know – we're about a quarter to the top of the hour, so we have about 15 minutes left for this discussion. If you have specific questions for Jen and/or Steve, feel free to drop them in the Discord or in the Slido, and I'll bring them up. But until I see more questions, I'll just keep asking my own, because I have more interest in this. So – and this is a question for both of you – how do we bring these two branches of the field closer together? Is it a matter of practitioners reviewing this list and reading blogs, and researchers reaching out to practitioners to involve them in their research practice? What are the sorts of things – and this can be aspirational, or it can be techniques you've seen and liked – how do you see a path forward to bring these two branches together? Whoever wants to start is totally fine. 

JC: I can start there. Events like this one at IEEE – I feel like, Jon, you've been chipping away at this for a while. In Chicago, I came to IEEE VIS for the first time to be on a panel that you had organized, and it really opened my eyes to what was going on in the research field. It's hard to make the space in your schedule to attend something like this unless you have a direct invitation, so I feel very fortunate that I was pulled into it – it opened my eyes. I also feel like there are a few other events that are starting to do this more, like Information+, which I found really useful to go through their talks, because that's another place in which researchers and practitioners are both presenting within the same context. 

SF: And my answer will be active collaboration. We do a lot of this in our lab, and it has absolutely changed my research life. Just to give an example, we have a new set of projects we're working on now – a new first-year grad student just came into the lab, Oshun, and she's going to work on dynamic displays, like the Hans Rosling display where it moves around. This has been a topic at this conference for a while now. We sat down for our first meeting and started thinking, well, what would be important here – I bet it's important if this happens, or I bet that this is a limit – and we caught ourselves predicting what actual practitioners care about, and we said, no, we'd better actually talk to practitioners. And the plan is that we're not going to do anything until we interview the person who makes educational diagrams that move, that show a physics simulation of molecules, or a data journalist who needs to have that scatterplot bouncing around in JavaScript somewhere on the internet. So the first stage is to actively interview and work with those folks, and then keep them involved throughout the rest of the project to keep us on track – to make sure we don't wander off into those more convenient petri dishes that can be easier to deal with, but where the problems don't become as interesting. 

JS: So Steve, on that one, what are your thoughts as to why – and I would put economics, my field, into the same group – is it just inertia that researchers haven't been doing this more? I've been making the case more recently that quantitative folks are folks who are trained in quantitative methods – well, I'll put it this way: I personally was trained in lots of quantitative methods, but never took any qualitative methods course. But everybody I know who does qualitative research primarily has some quantitative training: they know how to clean a dataset, they know how to run at least a regression. So is it just this inertia – the training that's been going on for decades that hasn't pushed people toward having these conversations? What does it take to move the camps together? Is it just more people like you two saying, yeah, we need to build these bridges, or is it something else, something bigger? 

SF: I don't know. I've personally been trying for a while, ever since seeing the light myself, to do more evangelism. But it's tough to do, because the field doesn't expect it, and there really isn't an incentive structure for it. You can publish things in that more ivory tower petri dish model, at least on the psych side, pretty easily, and it's tough to get people to do that extra work. The granting agencies focusing more on work with real-world implications is certainly helpful. And maybe there needs to be more critiquing happening within the fields, and that's hard – I don't want to be mean. So maybe less of an incentive structure of asking people to do it, and more calling it out when they don't. And I can say this as someone who has wandered off into petri dishes many times – my last year of grad school, I was studying little squares for the sake of studying little squares, because someone else had done it before me and someone else had done it before them. Especially when you're just getting into a field, you tend to look at what the more senior folks have done, and you tend to do that, because that's the thing you're supposed to do, and it's tougher to take the risk to go out into the field and find new problems. I had the luxury of doing that mostly post-tenure. So there are all these constraints – I hesitate to psychoanalyze the field too much, but it is a tough problem. 

JS: And so, Jen, when we first started, you mentioned a couple of particular challenges that you think would be good candidates for research and that you'd like the answer to – log scales, you mentioned, is a big one. Do you have in mind how other practitioners can seek out researchers to get the answers to those questions? I'm sure there are lots of folks out there who have similar questions, and they might be very small things that maybe they don't think are worthy of research, but really are. Just one off the top of my head: Robert Kosara and Drew Skau did a couple of papers on how we read pie charts, because no one had actually ever done that study before. So from your side, how do you think practitioners can get those questions in front of researchers to build that research base? 

JC: From my side? 

JS: Yeah. 

JC: Twitter, I don't know – I wish I knew, actually. Coming in and engaging with people at events like this; I have a [inaudible 00:50:25], and then at Information+ a while ago, I met some more researchers who were also there – they want to know what questions to ask and what to study. So I think it's about finding these opportunities to meet with folks, and then putting a bug in their ear: if you're looking for something, I'll give you some questions I have; they may or may not fit with your area of specialty or whatnot, but at least it gets a conversation going. It's also a matter of starting to follow people – if a piece of research answers one of your questions, do some research on who that author is and what else they're working on. I check out people's academic websites to see what other papers they've written. Every once in a while a paper will hit the mainstream, like Michelle Borkin's work on what makes a chart memorable. When something like that hits a broader audience, find out what else she is working on, what her collaborators are working on, where she presented that piece, and what else they're doing. So I look for these little windows that open up and then try to dive in a little bit more. 

SF: I like that – Twitter tends to work pretty well. Maybe there needs to be a hashtag declared, like IEEE VIS speed dating or something like that, where practitioners and researchers can meet up. I should say, by the way, my critique of the ivory tower [inaudible 00:51:45] tends to come more from my cognitive psychology hat. I'd say the DataViz field does care about qualitative methods – there are folks that do design studies and get into context, particularly with scientists; I could name Miriah Meyer, Tamara Munzner, [inaudible 00:52:00] Jason Dykes – and these are all people who do in-depth contextual work with experts and then take the lessons from those studies and extrapolate them. So it's not that everybody gets stuck; it's just that the field in general probably does a bit, and especially the cognitive field, which is my birthplace – that one I could critique a little more strongly. 

JS: So in terms of these partnerships and relationships between the two sides, do you have any tips for how the two groups can work together, given that they have very different timeframes? I mean, Jen, you already mentioned that you have to get the product out there, and it's got to go; and Steve, the academic timeline's a little bit longer most of the time. So any thoughts or tips on how to blend the timeframes of these two different groups? 

JC: Well, from my point of view, Scientific American is 175 years old – well, a little older than that now – and we tend to repeat some of the same topics every three, five, 10 years. So we have this steady march of graphics that have been done in different styles, in different eras, and in different ways, because the way we approach visualization has changed as our audience has. I feel like we have this wonderful archive: if you want to see how neutrinos behave and how people illustrated that 15 years ago, 10 years ago, five years ago, and today, it's all there. So in some cases, diving into the archives of publications would allow for some natural variation in the different ways something has been presented over time. So maybe there's a way to do it by pulling from the old, in different ways, for a similar audience. 

SF: That, for example, would be a very cool project if I were a first-year grad student: diving into the archives and seeing how similar ideas have been communicated in different ways over time, and how sometimes the way they're shown can cross domains of science, and sometimes it can't, because it's specific – and then maybe later doing some A/B testing and finding out which ones are best and why. I think that's the perfect example of the kind of inspiration that folks in the field should be looking for. 

JS: So we have basically two more questions from viewers. One is about getting information on the effectiveness of visualization by analyzing the website. Jen, you mentioned time on page and number of clicks – the metrics that we all kind of use. And I guess the question is really about developing other metrics: is there an appetite for doing that, and what might those metrics be? I think that latter question is for both of you. But the first question is, is there an appetite for better metrics, specifically around data visualization, that could help you, Jen, and your team do a better job of understanding how people are using your content? 

JC: Yeah, speaking from a completely naive point of view in terms of how this could technically work: the eye tracking kind of thing – what are people looking at in these things – and then being able to ask questions on comprehension afterwards, in terms of, did this change what your take-home messages from the article itself are. A lot of those are pie-in-the-sky ideas, though, because even if those tools can be made, they don't always play nicely with the content management systems of different organizations. So even what might work for the New York Times wouldn't necessarily work for Scientific American, etc. I think it's hard, because even if a tool exists and people say, oh, you should use this, it's like, well, yeah, I can't – it won't play nicely with the rest of the pieces. But in theory, I would love to know how people are actually reading through things and then be able to ask them questions afterwards. 

SF: I think that sums it up nicely. At the moment, you can look at engagement – do they click, how long do they stay – these are all things that are tractable for web interfaces. As soon as you want to eye track, you're going to either bring people to the lab or get permission to turn on their webcam, which a lot of people are not going to be game for. Then you can A/B test – if you can get the content management system to randomize between version A and version B, you can see how that affects engagement and reading time, but that's a technical hurdle to get around. And finally, if you really want to find out what they understood and what they didn't, having questions – qualitative text boxes, multiple choice questions, etc. – would be great, but is the average reader going to take the time to do that? And if so, what's the biased sample of readers you're getting that is willing to do that? It's tougher to do this. So these things are possible – we had a project we were tinkering with, with the City of Chicago's data office for a while, where they wanted to explain machine learning models: if the beach is closed today, we don't actually know that the bacteria level is too high; we have a model that suggests that it is, and people are mad because they can't go to the beach, and they really want to explain that super clearly. So we were going to test different ways of explaining these basic models to people, and that was a place where the infrastructure was available to A/B test, and you could see whether people make it through the page. But again, are people going to answer questions like, are you satisfied with this explanation? Only some do, and you run into biased samples, so it gets tougher. That's where the lab-based research can be handy – if you can properly model the context of the original person looking. Does the Mechanical Turker that you're bringing in to read this explanation really have the same perspective as the parent who's mad that they can't bring their kids to the beach? Maybe. So that's a place where having both sides – the quant and the qual, the lab and the context – is the only combination that, in the end, I think is possible for many of these problems. 

JS: Yeah. We're almost out of time, so I want to close by having you each tell people where they can get a hold of you, in this spirit of bringing the two groups together – practitioners who have ideas for research or need things solved, and researchers who have ideas for Jen. We'll go with Steve first. Steve, what's the best way for folks to get in touch with you, so they can pitch their ideas and you can go off and solve their problems? 

SF: I'd say my email is a good one – it's just my last name @Northwestern or Gmail. But even better is to use Twitter – just take off the N, @SteveFranconeri – and that is more fun because it gets the rest of the data visualization community involved; someone will respond, we'll create a conversation, more people get involved. So that's the one I'd really suggest. One other thing to note that you might be interested in – I'm going to put this in the Discord as well – there's a journal called Perspectives, excuse me, Psychological Science in the Public Interest, and Jessica Hullman, Priti Shah, Jeff Zacks, Lace Padilla, and I have a review paper of psych work on data visualization, where we cover and synthesize work from data visualization, graph comprehension, cognitive science, etc. Hopefully that might be of interest to folks, to get them at least cued up first on what's known from the psych side, so that they have a good baseline to be able to ask questions about things that are yet unknown. 

JS: That's terrific. I've actually seen that paper, so you should look forward to it – it's quite a good intro to this whole field, and it gives you a really deep look. Jen, what's the best place to get a hold of you, for those researchers who have ideas on how to do these tests, or other things they want to pitch you for Scientific American? 

JC: Sure. A great example of the divide between practitioner and researcher right now is that I don’t think I’m actively working with Discord; I don’t know how you all do it with these conferences, with like five different ways of doing this. So I don’t think it actually worked, but you can find me on, well, my website is just jenchristiansen.com, my name without a space. On Twitter, ChristiansenJen, no space. Those are great ways to get a hold of me. And I am accepting pitches for Graphic Science pages. So if you have ideas on that, maybe we can get some visualization research onto that page; I should probably have done that already, maybe you’ve done a bit. So that would be a great way to help give a megaphone to some of the visualization research going on out there. 

JS: Terrific. Thanks to you both, Jen Christiansen and Steve Franconeri, thanks so much for having this discussion and breaking down some of these walls. And thanks everybody for attending and tuning in; we’re looking forward to the rest of VisComm. I’m going to hand it back over to my co-organizers, and we will be back with our next full session. Thanks again. 

Thanks to everyone for tuning into this week’s episode of the podcast. I hope you’ll check out some of the resources and references that were included in that conversation; I’ve listed them all in the episode notes for this show. If you would like to support the show, please share it with your friends, family, neighbors, anyone who you think would be interested in a data visualization podcast. You can share all the links on your social networks. If you would like to support the show financially, head over to my Patreon page, where I’ve got new goodies ready to send out to you. You can also provide a one-time donation using my PayPal account. All of this is linked on the show notes page. So once again, thanks for tuning into this week’s episode of the podcast. Until next time, this has been the PolicyViz podcast. Thanks so much for listening. 

A number of people help bring you the PolicyViz podcast. Music is provided by the NRIs. Audio editing is provided by Ken Skaggs. Design and promotion are created with assistance from Sharon Stotsky Ramirez. And each episode is transcribed by Jenny Transcription Services. If you’d like to help support the podcast, please share it and review it on iTunes, Stitcher, Spotify, YouTube, or wherever you get your podcasts. The PolicyViz podcast is ad free and supported by listeners. If you’d like to help support the show financially, please visit our PayPal page or our Patreon page at patreon.com/policyviz. 

The post Episode #205: Steve Franconeri and Jen Christiansen at VisComm Workshop appeared first on PolicyViz.

 2021-11-16  1h2m