Tagged: Uncategorized

  • feedwordpress 07:30:43 on 2017-10-26 Permalink
    Tags: Uncategorized   

    See our applications 

    http://danariely.com/2017/10/25/applications-now-open-common-cents-lab-partnerships/

     



     
  • feedwordpress 11:31:25 on 2017-10-25 Permalink
    Tags: Uncategorized   

    Applications now open: Common Cents Lab Partnerships 

    We are looking for our next cohort of credit unions, tech companies, banks, non-profits, and government organizations to partner with, to find and test interventions that help Americans improve their financial well-being. Our open call is online now, taking applications until November 15th.
    Each year, we collaborate with chosen financial services providers to custom design, test, and launch new features and products that aim to increase financial well-being for 1.8 million low- to moderate-income (LMI) households in America. Partners have the opportunity to work directly with expert behavioral scientists to design solutions to many of our toughest financial decision-making challenges. Click here for more information, or click below to apply.
    Some social proof:
    Common Cents Lab has not only taught us about behavioral economics and how we can help our members have better savings and use the credit union more, but also about a methodical process to test and design our products to better match member needs.
    – Vicky Garcia, SVP Strategy and Risk Management, Latino Community Credit Union
    “The Common Cents partnership was instrumental in helping us develop features that drive substantial savings for our customers,”
    – Ethan Bloch, Founder and CEO of Digit.
    “Common Cents had added rigor to the way we build new features that improve our users’ lives,”
    – Jimmy Chen, Founder and CEO of Propel.
    Some of our press:

    We hope to see your application. Please visit: apply.commoncentslab.org


     
  • feedwordpress 01:37:04 on 2017-10-05 Permalink
    Tags: Uncategorized   

    Your Data is Being Manipulated 

    Excerpt from “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” Sergey Brin and Larry Page (April 1998)

    What follows is the crib from my keynote at the 2017 Strata Data Conference in New York City. Full video can be found here. 


    In 1998, two graduate students at Stanford decided to try to “fix” the problems with major search engines. Sergey Brin and Larry Page wrote a paper describing how their PageRank algorithm could eliminate the plethora of “junk results.” Their idea, which we all now know as the foundation of Google, was critical. But it didn’t stop people from trying to mess with their system. In fact, the rise of Google only increased the sophistication of those invested in search engine optimization.


    “google bombing” — diverting search engine rankings to subversive commentary about a public figure

    Fast forward to 2003, when the sitting Pennsylvania senator Rick Santorum publicly compared homosexuality to bestiality and pedophilia. Needless to say, the LGBT community was outraged. Journalist Dan Savage called on his readers to find a way to “memorialize the scandal.” One of his fans created a website to associate Santorum’s name with anal sex. To the senator’s horror, countless members of the public jumped in to link to that website in an effort to influence search engines. This form of crowdsourced SEO is commonly referred to as “Google bombing,” and it’s a form of media manipulation intended to mess with data and the information landscape.


    Media Manipulation and Disinformation Online (cover), March 2017. Illustration by Jim Cooke

    Media manipulation is not new. As many adversarial actors know, the boundaries between propaganda and social media marketing are often fuzzy. Furthermore, any company that uses public signals to inform aspects of its product — from Likes to Comments to Reviews — knows full well that any system you create will be gamed for fun, profit, politics, ideology, and power. Even Congress is now grappling with that reality. But I’m not here to tell you what has always been happening or even what is currently happening — I’m here to help you understand what’s about to happen.


    At this moment, AI is at the center of every business conversation. Companies, governments, and researchers are obsessed with data. Not surprisingly, so are adversarial actors. We are currently seeing an evolution in how data is being manipulated. If we believe that data can and should be used to inform people and fuel technology, we need to start building the infrastructure necessary to limit the corruption and abuse of that data — and grapple with how biased and problematic data might work its way into technology and, through that, into the foundations of our society.

    In short, I think we need to reconsider what security looks like in a data-driven world.


    Part 1: Gaming the System

    Like search engines, social media introduced a whole new target for manipulation. This attracted all sorts of people, from social media marketers to state actors. Messing with Twitter’s trending topics or Facebook’s news feed became a hobby for many. For $5, anyone could easily buy followers, likes, and comments on almost every major site. The economic and political incentives are obvious, but alongside these powerful actors, there are also a whole host of people with less-than-obvious intentions coordinating attacks on these systems.



    For example, when a distributed network of people decided to help propel Rick Astley to the top of the charts 20 years after his song “Never Gonna Give You Up” first came out, they weren’t trying to help him make a profit (although they did). Like other memes created through networks on sites like 4chan, rickrolling was for kicks. But through this practice, lots of people learned how to make content “go viral” or otherwise mess with systems. In other words, they learned to hack the attention economy. And, in doing so, they’ve developed strategic practices of manipulation that can and do have serious consequences.


    A story like “#Pizzagate” doesn’t happen accidentally — it was produced by a wide network of folks looking to toy with the information ecosystem. They created a cross-platform network of fake accounts known as “sock puppets,” which they use to subtly influence journalists and other powerful actors to pay attention to strategically produced questions, blog posts, and YouTube videos. The goal with a story like that isn’t to convince journalists that it’s true, but to get them to foolishly use their amplification channels to negate it. This produces a “boomerang effect,” whereby those who don’t trust the media believe that there must be merit to the conspiracy, prompting some to “self-investigate.”



    Then there’s the universe of content designed to “open the Overton window” — or increase the range of topics that are acceptable to discuss in public. Journalists are tricked into spreading problematic frames. Moreover, recommendation engines can be used to encourage those who are open to problematic frames to go deeper. Researcher Joan Donovan studies white supremacy; after work, she can’t open Amazon, Netflix, or YouTube without being recommended to consume neo-Nazi music, videos, and branded objects. Radical trolls also know how to leverage this infrastructure to cause trouble. Without tripping any of Twitter’s protective mechanisms, the well-known troll weev managed to use the company’s ad infrastructure to amplify white supremacist ideas to those focused on social justice, causing outrage and anger.

    By and large, these games have been fairly manual attacks of algorithmic systems, but as we all know, that’s been changing. And it’s about to change again.


    Part 2: Vulnerable Training Sets

    Training a machine learning system requires data. Lots of it. While there are some standard corpuses, computer science researchers, startups, and big companies are increasingly hungry for new — and different — data.

    Cognitive Psychology for Deep Neural Networks: A Shape Bias Case Study, June 29, 2017

    The first problem is that all data is biased, most notably and recognizably by reflecting the biases of humans and of society in general. Take, for example, the popular ImageNet dataset. Because humans categorize by shape faster than they categorize by color, you end up with some weird artifacts in that data.


    (a) and (c) show ads served for two individuals’ names; (b) and (d) show that the advertising suggested criminal histories based on the type of name, not on actual records

    Things get even messier when you’re dealing with social prejudices. When Latanya Sweeney searched for her name on Google, she was surprised to be given ads inviting her to find out if she had a criminal record. As a curious computer scientist, she decided to run a range of common black and white names through the system to see which ads popped up. Unsurprisingly, only black names produced ads for criminal justice products. This isn’t because Google knowingly treated the names differently, but because searchers were more likely to click on criminal justice ads when searching for black names. Google learned American racism and amplified it back at all of its users.

    Addressing implicit and explicit cultural biases in data is going to be a huge challenge for everyone who is trying to build a system dependent on data classified by or about humans.
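A simple first check for anyone sitting on this kind of data is to compare outcome rates across groups before shipping, which is essentially what Sweeney's audit did. Here is a minimal Python sketch of that idea; the names, groups, and counts are entirely made up for illustration, not Sweeney's actual data.

```python
from collections import Counter

# Hypothetical audit data: for each queried name, the group it belongs to
# and whether the criminal-record ad was shown. All entries are illustrative.
observations = [
    ("latanya", "black-identified", True),
    ("latanya", "black-identified", True),
    ("ebony",   "black-identified", True),
    ("ebony",   "black-identified", False),
    ("emily",   "white-identified", False),
    ("emily",   "white-identified", False),
    ("claire",  "white-identified", True),
    ("claire",  "white-identified", False),
]

def ad_rate_by_group(obs):
    """Return the fraction of queries in each group that triggered the ad."""
    shown, total = Counter(), Counter()
    for _name, group, ad_shown in obs:
        total[group] += 1
        shown[group] += ad_shown  # bool counts as 0 or 1
    return {g: shown[g] / total[g] for g in total}

rates = ad_rate_by_group(observations)
for group, rate in sorted(rates.items()):
    print(f"{group}: {rate:.0%}")
```

A skewed gap between groups doesn't prove intent, as the Sweeney case shows, but it is exactly the kind of signal an audit should surface before a system learns and amplifies it.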


    But there’s also a new challenge emerging. The same decentralized networks of people — and state actors — who have been messing with social media and search engines are increasingly eyeing the data that various companies use to train and improve their systems.

    Consider, for example, the role of reddit and Twitter data as training data. Computer scientists have long pulled from the very generous APIs of these companies to train all sorts of models, trying to understand natural language, develop metadata around links, and track social patterns. They’ve trained models to detect depression, rank news, and engage in conversation. Ignoring the fact that this data is not representative in the first place, most engineers who use these APIs believe that it’s possible to clean the data and remove all problematic content. I can promise you it’s not.

    No amount of excluding certain subreddits, removing categories of tweets, or ignoring content with problematic words will prepare you for those who are hellbent on messing with you.
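To make the point concrete, here is the kind of blocklist filter engineers typically reach for, sketched in Python with placeholder subreddit names and terms. It removes the obviously bad content, while a strategically injected, innocuous-looking post sails straight through.

```python
# Naive "clean the data" approach: drop posts from blocklisted subreddits
# or containing blocklisted words. Names and terms here are placeholders.
BLOCKED_SUBREDDITS = {"examplehate", "examplebrigade"}
BLOCKED_TERMS = {"slur1", "slur2"}

def naive_clean(posts):
    """Keep only posts that pass both blocklists.

    This catches known-bad content but says nothing about coordinated
    actors writing innocuous-looking text designed to skew a model.
    """
    kept = []
    for post in posts:
        if post["subreddit"].lower() in BLOCKED_SUBREDDITS:
            continue
        words = set(post["text"].lower().split())
        if words & BLOCKED_TERMS:
            continue
        kept.append(post)
    return kept

posts = [
    {"subreddit": "news", "text": "ordinary discussion"},
    {"subreddit": "ExampleHate", "text": "anything"},          # blocked subreddit
    {"subreddit": "news", "text": "contains slur1 here"},      # blocked term
    {"subreddit": "news", "text": "subtly coordinated talking point"},  # passes
]
print(len(naive_clean(posts)))  # two posts survive, including the injected one
```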

    I’m watching countless actors experimenting with ways to mess with public data with an eye on major companies’ systems. They are trying to fly below the radar. If you don’t have a structure in place for strategically grappling with how those with an agenda might try to route around your best laid plans, you’re vulnerable. This isn’t about accidental or natural content. It’s not even about culturally biased data. This is about strategically gamified content injected into systems by people who are trying to guess what you’ll do.


    If you want to grasp what that means, consider the experiment Nicolas Papernot and his colleagues published last year. In order to understand the vulnerabilities of computer vision algorithms, they decided to alter images of stop signs so that they still resembled a stop sign to a human viewer even as the underlying neural network interpreted them as a yield sign. Think about what this means for autonomous vehicles. Will this technology be widely adopted if the classifier can be manipulated so easily?

    Practical Black-Box Attacks against Machine Learning, March 19, 2017. The images in the top row are altered to disrupt the neural network, leading to the misinterpretations in the bottom row. The alterations are not visible to the human eye.
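The intuition behind such attacks can be shown on a toy linear classifier. This is a simplification for illustration, not the black-box technique of the paper: in high dimensions, a tiny per-coordinate nudge against the gradient adds up to a large change in the classifier's score, flipping the label while no individual input value changes much.

```python
import numpy as np

# Toy FGSM-style sketch: for a linear classifier score = w . x, stepping
# every coordinate of x slightly against sign(w) shifts the score by
# eps * sum(|w|), which grows with the dimension, while the starting
# margin is only on the order of sqrt(dim).
rng = np.random.default_rng(0)
dim = 10_000
w = rng.normal(size=dim)          # classifier weights ("stop" if w.x > 0)
x = rng.normal(size=dim)          # an input image, flattened
if w @ x <= 0:
    x = -x                        # ensure it starts classified as "stop"

def classify(w, x):
    return "stop" if w @ x > 0 else "yield"

eps = 0.1                         # small per-coordinate budget
x_adv = x - eps * np.sign(w)      # step against the gradient of the score

print(classify(w, x))             # original label
print(classify(w, x_adv))         # label after a small L-infinity change
```

The perturbation changes each coordinate by at most `eps`, yet the label flips; real vision attacks exploit the same imbalance between per-pixel change and aggregate score change.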

    Right now, most successful data-injection attacks on machine learning models are happening in the world of research, but more and more, we are seeing people try to mess with mainstream systems. Just because they haven’t been particularly successful yet doesn’t mean that they aren’t learning and evolving their attempts.


    Part 3: Building Technical Antibodies

    Many companies spent decades not taking security vulnerabilities seriously, until breach after breach hit the news. Do we need to go through the same pain before we start building the tools to address this new vulnerability?

    If you are building data-driven systems, you need to start thinking about how that data can be corrupted, by whom, and for what purpose.


    In the tech industry, we have lost the culture of Test. Part of the blame rests on the shoulders of social media. Fifteen years ago, we got the bright idea to shift to a culture of the “perpetual beta.” We invited the public to be our quality assurance engineers. But internal QA wasn’t simply about finding bugs. It was about integrating adversarial thinking into the design and development process. And asking the public to find bugs in our systems doesn’t work well when some of those same people are trying to mess with our systems. Furthermore, there is currently no incentive — or path — for anyone to privately tell us where things go wrong. Only when journalists shame us by finding ways to trick our systems into advertising to neo-Nazis do we pay attention. Yet, far more maliciously intended actors are starting to play the long game in messing with our data. Why aren’t we trying to get ahead of this?


    On the bright side, there’s an emergent world of researchers building adversarial thinking into the advanced development of machine learning systems.

    Consider, for example, the research into generative adversarial networks (or GANs). For those unfamiliar with this line of work, the idea is that you have two unsupervised ML algorithms — one is trying to generate content for the other to evaluate. The first is trying to trick the second into accepting “wrong” information. This work is all about trying to find the boundaries of your model and the latent space of your data. We need to see a lot more R&D work like this — this is the research end of a culture of Test, with true adversarial thinking baked directly into the process of building models.
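Stripped down to one dimension, the GAN game looks like the sketch below. This is a toy NumPy illustration, with made-up distributions, learning rates, and step counts: a generator G(z) = a·z + b tries to make its samples look like data drawn from N(4, 0.5), while a logistic discriminator D(x) = sigmoid(w·x + c) tries to tell real from generated.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0          # generator parameters: G(z) = a*z + b
w, c = 0.1, 0.0          # discriminator parameters: D(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

for step in range(4000):
    real = rng.normal(4.0, 0.5, batch)
    z = rng.normal(size=batch)
    fake = a * z + b

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator: ascent on log D(fake) -- trying to trick the
    # discriminator into accepting "wrong" samples
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

fake_mean = b  # E[G(z)] = b, since z has mean 0
print(round(fake_mean, 2))  # drifts toward the real mean of 4
```

The point of the exercise is the adversarial pressure itself: the generator is rewarded precisely for finding inputs at the boundary of what the discriminator can distinguish, which is why this setup doubles as a probe of a model's weak spots.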


    White Hat Hackers — those who hack for “the right reasons.” For instance, testing the security or vulnerabilities of a system (Image: CC Magicon, HU)

    But these research efforts are not enough. We need to actively and intentionally build a culture of adversarial testing, auditing, and learning into our development practice. We need to build analytic approaches to assess the biases of any dataset we use. And we need to build tools to monitor how our systems evolve with as much effort as we build our models in the first place. My colleague Matt Goerzen argues that we also need to strategically invite white hat trolls to mess with our systems and help us understand our vulnerabilities.


    The tech industry is no longer the passion play of a bunch of geeks trying to do cool shit in the world. It’s now the foundation of our democracy, economy, and information landscape.

    We no longer have the luxury of only thinking about the world we want to build. We must also strategically think about how others want to manipulate our systems to do harm and cause chaos.

     
  • feedwordpress 15:33:22 on 2017-09-12 Permalink
    Tags: Uncategorized   

    A new iPhone? 

    Dear Dan,
    My husband really wants to get the new iPhone that will come out next week.  He has not seen it and he knows nothing about it, but he already knows that he wants it.  I, on the other hand, know a few things for sure.  I know that $1,000 is a lot to pay for a slightly newer phone (he has the previous iPhone), I know that he will get used to this new toy very quickly, I know that it will give him less pleasure than he is expecting, and I know that in a few months there will be yet a newer iPhone and that he will want that one as well.  How can I get him to see the mistake he is about to make?
    Sumi
    Dear Sumi,
    Just read your own words: “He really wants…” and now tell me who is about to make a mistake here! Your loving, hardworking husband, who wants to experience firsthand the new frontier of technology, or you, with your emotion-free approach to his needs and joy?  If I were you, I would not only encourage him to buy this new iPhone, I would also make it easier to rationalize by asking him to give his old iPhone to your daughter.

     


     
  • feedwordpress 14:34:59 on 2017-09-12 Permalink
    Tags: Uncategorized   

    Data & Society’s Next Stage 

    In March 2013, in a flurry of days, I decided to start a research institute. I’d always dreamed of doing so, but it was really my amazing mentor and boss – Jennifer Chayes – who put the fire under my toosh. I’d been driving her crazy about the need to have more people deeply interrogating how data-driven technologies were intersecting with society. Microsoft Research didn’t have the structure to allow me to move fast (and break things). University infrastructure was even slower. There were a few amazing research centers and think tanks, but I wanted to see the efforts scale faster. And I wanted to build the structures to connect research and practices, convene conversations across sectors, and bring together a band of what I loved to call “misfit toys.”  So, with the support of Jennifer and Microsoft, I put pen to paper. And to my surprise, I got the green light to help start a wholly independent research institute.

    I knew nothing about building an organization. I had never managed anyone, didn’t know squat about how to put together a budget, and couldn’t even create a checklist of to-dos. So I called up people smarter than I am to help me learn how other organizations worked and figure out what I should learn to turn a crazy idea into reality. At first, I thought that I should just go and find someone to run the organization, but I was consistently told that I needed to do it myself, to prove that it could work. So I did. It was a crazy adventure. Not only did I learn a lot about fundraising, management, and budgeting, but I also learned all sorts of things about topics I didn’t even know I would learn to understand – architecture, human resources, audits, non-profit law. I screwed up plenty of things along the way, but most people were patient with me and helped me learn from my mistakes. I am forever grateful to all of the funders, organizations, practitioners, and researchers who took a chance on me.

    Still, over the next four years, I never lost that nagging feeling that someone smarter and more capable than me should be running Data & Society. I felt like I was doing the organization a disservice by not focusing on research strategy and public engagement. So when I turned to the board and said, it’s time for an executive director to take over, everyone agreed. We sat down and mapped out what we needed – a strategic and capable leader who’s passionate about building a healthy and sustainable research organization to be impactful in the world. Luckily, we had hired exactly that person to drive program and strategy a year before when I was concerned that I was flailing at managing the fieldbuilding and outreach part of the organization.

    I am overwhelmingly OMG ecstatically bouncing for joy to announce that Janet Haven has agreed to become Data & Society’s first executive director. You can read more about Janet through the formal organizational announcement here.  But since this is my blog and I’m telling my story, what I want to say is more personal. I was truly breaking when we hired Janet. I had bitten off more than I could chew. I was hitting rock bottom and trying desperately to put on a strong face to support everyone else. As I see it, Janet came in, took one look at the duct tape upon which I’d built the organization, and got to work with steel, concrete, and wood in her hands. She helped me see what could happen if we fixed this and that. And then she started helping me see new pathways for moving forward. Over the last 18 months, I’ve grown increasingly confident that what we’re doing makes sense and that we can build an organization that can last. I’ve also been in awe watching her enable others to shine.

    I’m not leaving Data & Society. To the contrary, I’m actually taking on the role that my title – founder and president – signals. And I’m ecstatic. Over the last 4.5 years, I’ve learned what I’m good at and what I’m not, what excites me and what makes me want to stay in bed. I built Data & Society because I believe that it needs to exist in this world. But I also realize that I’m the classic founder – the crazy visionary that can kickstart insanity but who isn’t necessarily the right person to take an organization to the next stage. Lucky for me, Janet is. And together, I can’t wait to take Data & Society to the next level!

     
  • feedwordpress 02:55:16 on 2017-09-04 Permalink
    Tags: Uncategorized   

    First review of DOLLARS AND SENSE 

    This is a review from Kirkus and they are not easy to please…

     

    DOLLARS AND SENSE
    How We Misthink Money and How to Spend Smarter
    Author: Dan Ariely
    Author: Jeff Kreisler
    Illustrator: Matt Trower

    Review Issue Date: September 15, 2017
    Online Publish Date: September 4, 2017
    Publisher: Harper/HarperCollins

    A lively look at how even the wisest among us are too often fools eager to part with our money. Most of us think about money at least some portion of each day—how to get more of it, how to spend less of it. However, cautions Ariely (Psychology and Behavioral Economics/Duke Univ.; Payoff: The Hidden Logic That Shapes Our Motivations, 2016, etc.), working with comedian and writer Kreisler (Get Rich Cheating, 2009), “when we bring money into the equation, we make the decisions much more difficult and we open ourselves to mistakes.” The better course, they urge, is to consider money not for its own sake—indeed, not to acknowledge its existence at all—but instead to consider the concept of opportunity cost: what do we give up when we make one choice over another? Is the forgone acquisition really the correct one? What if, instead of buying a big-screen TV or new clothes, we thought of what we might do with the hours we don’t have to work in order to procure them or of the other things we might buy in their place? Such counsel comes after consideration of other economic notions, such as the endowment effect, whereby we give more significance to things simply because we own them, and our generally risk-averse economic behavior, whereby the pleasure taken in gaining something is vastly overshadowed by the pain caused by losing it. Ariely and Kreisler, writing breezily but meaningfully, allow that money has its uses as a symbolic system of fungible, storable, accessible value. However, the real consideration should always be that “spending money now on one thing is a trade-off for spending it on something else,” a calculation that is not often reckoned simply because it’s more difficult than fishing out a credit card or some other means of delaying the recognition that spending money now has future, downstream effects. A user-friendly and often entertaining treatise on how to be a more discerning, vastly more aware handler of money.


     
  • feedwordpress 01:50:25 on 2017-08-02 Permalink
    Tags: career, medialab, mit, Uncategorized   

    How “Demo-or-Die” Helped My Career 

    I left the Media Lab 15 years ago this week. At the time, I never would’ve predicted that I learned one of the most useful skills in my career there: demo-or-die.

    (Me debugging an exhibit in 2002)

    The culture of “demo-or-die” has been heavily critiqued over the years. In critiquing it, most folks focus on the words themselves. Sure, the “or-die” piece is definitely an exaggeration, but the important message there is the notion of pressure. But that’s not what most people focus on. They focus on the notion of a “demo.”

    To the best that anyone can recall, the root of the term stems from the early days of the Media Lab, most likely because of Nicholas Negroponte’s dismissal of “publish-or-perish” in academia. So the idea was to focus not on writing words but producing artifacts. In mocking what it was that the Media Lab produced, many critics focused on the way in which the Lab had a tendency to create vaporware, performed to visitors through the demo. In 1987, Stewart Brand called this “handwaving.” The historian Molly Steenson has a more nuanced view so I can’t wait to read her upcoming book. But the mockery of the notion of a demo hasn’t died. Given this, it’s not surprising that the current Director (Joi Ito) has pushed people to stop talking about demoing and start thinking about deploying. Hence, “deploy-or-die.”

    I would argue that what makes “demo-or-die” so powerful has absolutely nothing to do with the production of a demo. It has to do with the act of doing a demo. And that distinction is important because that’s where the skill development that I relish lies.

    When I was at the Lab, we regularly received an onslaught of visitors. I was a part of the “Sociable Media Group,” run by Judith Donath. From our first day in the group, we were trained to be able to tell the story of the Media Lab, the mission of our group, and the goal of everyone’s research projects. Furthermore, we had to actually demo their quasi-functioning code and pray that it wouldn’t fall apart in front of an important visitor. We were each assigned a day where we were “on call” to do demos for any surprise visitor. You could expect to have at least one visitor every day, not to mention hundreds of visitors on days that were officially sanctioned as “Sponsor Days.”

    The motivations and interests of visitors ranged wildly. You’d have tour groups of VIP prospective students, dignitaries from foreign governments, Hollywood types, school teachers, engineers, and a whole host of different corporate actors. If you were lucky, you knew who was visiting ahead of time. But that was rare. Often, someone would walk in the door with someone else from the Lab and introduce you to someone for whom you’d have to drum up a demo in very short order with limited information. You’d have to quickly discern what this visitor was interested in, figure out which of the team’s research projects would be most likely to appeal, determine how to tell the story of that research in a way that connected to the visitor, and be prepared to field any questions that might emerge. And oy vay could the questions run the gamut.

    I *hated* the culture of demo-or-die. I felt like a zoo animal on display for others’ benefit. I hated the emotional work that was needed to manage stupid questions, not to mention the requirement to smile and play nice even when being treated like shit by a visitor. I hated the disruptions and the stressful feeling when a demo collapsed. Drawing on my experience working in fast food, I developed a set of tricks for staying calm. Count how many times a visitor said a certain word. Nod politely while thinking about unicorns. Experiment with the wording of a particular demo to see if I could provoke a reaction. Etc.

    When I left the Media Lab, I was ecstatic to never have to do another demo in my life. Except, that’s the funny thing about learning something important… you realize that you are forever changed by the experience.

    I no longer produce demos, but as I developed in my career, I realized that “demo-or-die” wasn’t really about the demo itself. At the end of the day, the goal wasn’t to pitch the demo — it was to help the visitor change their perspective of the world through the lens of the demo. In trying to shift their thinking, we had to invite them to see the world differently. The demo was a prop. Everything about what I do as a researcher is rooted in the goal of using empirical work to help challenge people’s assumptions and generate new frames that people can work with. I have to understand where they’re coming from, appreciate their perspective, and then strategically engage them to shift their point of view. Like my days at the Media Lab, I don’t always succeed and it is indeed frustrating, especially because I don’t have a prop that I can rely on when everything goes wrong. But spending two years developing that muscle has been so essential for my work as an ethnographer, researcher, and public speaker.

    I get why Joi reframed it as “deploy-or-die.” When it comes to actually building systems, impact is everything. But I really hope that the fundamental practice of “demo-or-die” isn’t gone. Those of us who build systems or generate knowledge day in and day out often have too little experience explaining ourselves to the wide array of folks who showed up to visit the Media Lab. It’s easy to explain what you do to people who share your ideas, values, and goals. It’s a lot harder to explain your contributions to those who live in other worlds. Impact isn’t just about deploying a system; it’s about understanding how that system or idea will be used. And that requires being able to explain your thinking to anyone at any moment. And that’s the skill that I learned from the “demo-or-die” culture.

     
  • feedwordpress 19:55:09 on 2017-07-05 Permalink
    Tags: , harassment, , Uncategorized   

    Tech Culture Can Change 

    We need: Recognition, Repentance, Respect, and Reparation.

    To be honest, what surprises me most about the current conversation about the inhospitable nature of tech for women is that people are surprised. To say that discrimination, harassment, and sexual innuendos are an open secret is an understatement. I don’t know a woman in tech who doesn’t have war stories. Yet, for whatever reason, we are now in a moment where people are paying attention. And for that, I am grateful.

    Like many women in tech, I’ve developed strategies for coping. I’ve had to in order to stay in the field. I’ve tried to be “one of the guys,” pretending to blend into the background as sexist speech was jockeyed about in the hopes that I could just fit in. I’ve tried to be the kid sister, the freaky weirdo, the asexual geek, etc. I’ve even tried to use my sexuality to my advantage in the hopes that maybe I could recover some of the lost opportunity that I faced by being a woman. It took me years to realize that none of these strategies would make me feel like I belonged. Many even made me feel worse.

    For years, I included Ani DiFranco lyrics in every snippet of code I wrote, as well as my signature. I’ve maintained a lyrics site since I was 18 because her words give me strength for coping with the onslaught of commentary and gross behavior. “Self-preservation is a full-time occupation.” I can’t tell you how often I’ve sat in a car during a conference or after a meeting singing along off-key at full volume with tears streaming down my face, just trying to keep my head together.

    What’s at stake is not about a few bad actors. There’s also a range of behaviors getting lumped together, resulting in folks asking if inescapable sexual overtures are really that bad compared to assault. That’s an unproductive conversation because the fundamental problem is the normalization of atrocious behavior that makes room for a wide range of inappropriate actions. Fundamentally, the problem with systemic sexism is that it’s not the individual people who are the problem. It’s the culture. And navigating the culture is exhausting and disheartening. It’s the collection of particles of sand that quickly becomes a mountain that threatens to bury you.

    It’s having to constantly stomach sexist comments with a smile, having to work twice as hard to be heard in a meeting, having to respond to people who ask if you’re on the panel because they needed a woman. It’s about going to conferences where deals are made in the sauna but being told that you have to go to the sauna with “the wives” (a pejoratively constructed use of the word). It’s about people assuming you’re sleeping with whoever said something nice about you. It’s being told “you’re kinda smart for a chick” when you volunteer to help a founder. It’s knowing that you’ll receive sexualized threats for commenting on certain topics as a blogger. It’s giving a talk at a conference and being objectified by the audience. It’s building whisper campaigns among women to indicate which guys to avoid. It’s using Dodgeball/Foursquare to know which parties not to attend based on who has checked in. It’s losing friends because you won’t work with a founder who you watched molest a woman at a party (and then watching Justin Timberlake portray that founder’s behavior as entertainment).

    Lots of people in tech have said completely inappropriate things to women. I also recognize that many of those guys are trying to fit into the sexist norms of tech too, trying to replicate the culture that they see around them because they too are struggling for status. But that’s the problem. Once guys receive power and status within the sector, they don’t drop their inappropriate language. They don’t change their behavior or call out others on how insidious it is. They let the same dynamics fester as though it’s just part of the hazing ritual.

    For women who succeed in tech, the barrage of sexism remains. It just changes shape as we get older.

    On Friday night, after reading the NYTimes article on tech industry harassment, I was deeply sad. Not because the stories were shocking — frankly, those incidents are minor compared to some of what I’ve seen. I was upset because stories like this typically polarize and prompt efforts to focus on individuals rather than the culture. There’s an assumption that these are one-off incidents. They’re not.

    I appreciate that Dave and Chris owned up to their role in contributing to a hostile culture. I know that it’s painful to hear that something you said or did hurt someone else when you didn’t intend that to be the case. I hope that they’re going through a tremendous amount of soul-searching and self-reflection. I appreciate Chris’ willingness to take to Medium to effectively say “I screwed up.” Ideally, they will both come out of this willing to make amends and right their wrongs.

    Unfortunately, most people don’t actually respond productively when they’re called out. Shaming can often backfire.

    One of the reasons that most people don’t speak up is that it’s far more common for guys who are called out on their misdeeds to respond the way that Marc Canter appeared to do, by justifying his behavior and demonizing the woman who accused him of sexualizing her. Given my own experiences with his sexist commentary, I decided to tweet out in solidarity by publicly sharing how he repeatedly asked me for a threesome with his wife early on in my career. At the time, I was young and I was genuinely scared of him; I spent a lot of time and emotional energy avoiding him, and struggled with how to navigate him at various conferences. I wasn’t the only one who faced his lewd comments, often framed as being sex-positive even when they were an abuse of power. My guess is that Marc has no idea how many women he’s made feel uncomfortable, ashamed, and scared. The question is whether or not he will admit that to himself, let alone to others.

    I’m not interested in calling people out for sadistic pleasure. I want to see the change that most women in tech long for. At its core, the tech industry is idealistic and dreamy, imagining innovations that could change the world. Yet, when it comes to self-reflexivity, tech is just as regressive as many other male-dominated sectors. Still, I fully admit that I hold it to a higher standard in no small part because of the widespread commitment in tech to change the world for the better, however flawed that fantastical idealism is.

    Given this, what I want from men in tech boils down to four Rs: Recognition. Repentance. Respect. Reparation.

    Recognition. I want to see everyone — men and women — recognize how contributing to a culture of sexism takes us down an unhealthy path, not only making tech inhospitable for women but also undermining the quality of innovation and enabling the creation of tech that does societal harm. I want men in particular to reflect on how the small things that they do and say that they self-narrate as part of the game can do real and lasting harm, regardless of what they intended or what status level they have within the sector. I want those who witness the misdeeds of others to understand that they’re contributing to the problem.

    Repentance. I want guys in tech — and especially those founders and funders who hold the keys to others’ opportunity — to take a moment and think about those that they’ve hurt in their path to success and actively, intentionally, and voluntarily apologize and ask for forgiveness. I want them to reach out to someone they said something inappropriate to, someone whose life they made difficult and say “I’m sorry.”

    Respect. I want to see a culture of respect actively nurtured and encouraged alongside a culture of competition. Respect requires acknowledging others’ struggles, appreciating each others’ strengths and weaknesses, and helping each other through hard times. Many of the old-timers in tech are nervous that tech culture is being subsumed by financialization. Part of resisting this transformation is putting respect front and center. Long-term success requires thinking holistically about society, not just focusing on current capitalization.

    Reparation. Every guy out there who wants to see tech thrive owes it to the field to actively seek out and mentor, support, fund, open doors for, and otherwise empower women and people of color. No excuses, no self-justifications, no sexualized bullshit. Just behavior change. Plain and simple. If our sector is about placing bets, let’s bet on a better world. And let’s solve for social equity.

    I have a lot of respect for the women who are telling their stories, but we owe it to them to listen to the culture that they’re describing. Sadly, there are so many more stories that are not yet told. I realize that these stories are more powerful when people are named. My only hope is that those who are risking the backlash to name names will not suffer for doing so. Ideally, those who are named will not try to self-justify but acknowledge and accept that they’ve caused pain. I strongly believe that changing the norms is the only path forward. So while I want to see people held accountable, I especially want to see the industry work towards encouraging and supporting behavior change. At the end of the day, we will not solve the systemic culture of sexism by trying to weed out bad people, but we can work towards rendering bad behavior permanently unacceptable.

     
  • feedwordpress 12:00:21 on 2017-06-26 Permalink
    Tags: , , , Uncategorized   

    Beginning at the End 

    Part of the CAH Startup Lab Experimenting in Business Series

    By Rachael Meleney and Aline Holzwarth

    Missteps in business are costly—they drain time, energy, and money. Of course, business leaders never start a project with the intention to fail, whether it’s implementing a new program, launching a new technology, or trying a new marketing campaign. Yet new ventures risk floundering if not properly approached—that is, with evidence-based decisions rather than reliance on intuition.

    Let’s say your company is in the business of connecting consumers to savings accounts, helping them save for retirement through your app. You need to decide how your product will achieve this. Let’s look at how an intuition-backed approach (Company A) compares to a research-backed approach (Company B) in this scenario:

    [Image: side-by-side comparison of Company A’s intuition-backed approach and Company B’s research-backed approach]

    What if there were a way to make business ventures less prone to failure? As the example above illustrates, the solution lies in well-planned experimentation.

    If businesses can learn to identify the concrete decisions needed to move new or existing projects forward, and set up experiments that directly inform those decisions, then much of the painful time, energy, and monetary cost of mistakes can be avoided. However, many business leaders and entrepreneurs are wary and unsure about how to leverage research to benefit their companies most effectively. As a result, they rely (perhaps unknowingly) on riskier decision-making approaches.

    As social scientists at Dan Ariely’s Center for Advanced Hindsight at Duke University, we’re in the business of human behavior and decision-making. We see in our research the effects that our biases and environments have on our ability to make optimal decisions. In the high-risk environment of building a company, founders aren’t well-served by calling shots based on gut feelings or reasoning plagued by cognitive biases. (Don’t feel bad! We’re only human!)

    The better route? Rigorous experimentation. Asking well-formed research questions, designing tests with isolated variables and control groups, randomizing users to groups, and using data to inform business decisions.
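    As a rough sketch of what the randomization step can look like, here is a minimal Python example; the group names and the per-user seeding scheme are illustrative assumptions, not a description of any particular product:

```python
import random

# Hypothetical experimental arms for the savings-app example:
# a control group plus two treatments (reminders, social accountability).
GROUPS = ("control", "reminders", "social_accountability")


def assign_group(user_id: int, groups: tuple = GROUPS) -> str:
    """Randomly assign a user to one experimental group.

    Seeding the generator with the user's id makes the assignment
    deterministic: the same user always lands in the same group.
    """
    rng = random.Random(user_id)
    return rng.choice(groups)
```

    Seeding per user is one simple way to keep assignments stable across sessions; in production you would more likely persist each assignment in a database.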

    Adapting the Process: Making Experimental Results Actionable

    At the Center for Advanced Hindsight’s Startup Lab (our academic incubator for health and finance technologies), our mission is to equip startups with the tools to make business decisions firmly grounded in research.

    But the research process has to be more accessible to businesses. Entrepreneurs often come to us excited about research, but with little to no idea of what it takes to execute a rigorous experiment. There’s a lot of anguish, confusion, and hesitation about where to even begin.

    The Startup Lab makes experimentation more approachable to entrepreneurs who have the will, but often not the time or resources, to run studies like our colleagues in academia.

    There are specific considerations that businesses must take into account when wading into the world of research. An important one: what makes investing in experimentation worthwhile? The driving purpose of running experiments, in most business contexts, is to uncover results that are clearly actionable.

    So how do you ensure actionable results? You set up your process with this goal in mind from the start – not as an afterthought. The Startup Lab adapts Alan R. Andreasen’s concept of “backward market research”[1] to bring the process of planning and executing a research project to entrepreneurs.

    The ‘Backward’ Approach: Beginning at the End

    “Beginning at the end” means that you first determine what decision you’ll make once you know the results of your research, and let that dictate what data you need to collect and what your results need to look like in order to make that decision.

    This ‘backward’ planning is not how businesses usually approach research projects. The typical approach to research is to start with a problem. In business, this often leads to identifying a lot of vague unknowns—a “broad area of ignorance” as Andreasen calls it—and leaves a loosely defined goal of simply reducing ignorance. For example, startups often come to us with the goal of better understanding their customers. While we commend this noble goal, we ask, “to what end?”

    What business decision will you make based on what your research uncovers?

    The problem with the simple exploratory approach is that it sets you up for certain failure from the start. An unclear question produces an unclear answer. You’ll end up with data that you can’t possibly base a concrete decision on.

    Let’s revisit the example of Company A (the intuition-based entrepreneurs) and Company B (their ‘backward market research’ counterparts). If you approach product development based on intuition like Company A, then you may be tempted to ask your customers about the challenges they have saving, or their ideal income at retirement – but this would be misguided if you don’t first determine what you will do with this information. If you find that your customers have trouble saving, you might conclude that you need to give them more information about the benefits of saving for retirement. If you create this content and build it into your app only to discover that it is completely ineffective at increasing savings, will you trash your product and start over? (Probably not. But if you set up your research this way, then that may be the only logical conclusion.)

    [Image: the ‘backward market research’ process]

    But imagine that instead, like Company B, you plan a research project to collect data on your users’ saving behavior (instead of probing to reduce your ignorance on their stated savings challenges). You set up an experiment to test two different ways of encouraging users to save (your two treatment groups, reminders and social accountability) against the current version of your product (what we call a control group).

    Now you’ve set up an experiment, but you can’t stop at designing your research.

    The “backward market research” approach forces you to specify which decisions you will make based on the outcome.

    If the social accountability version leads your users to save twice as much as they do when using your base product, will you implement this mechanism in your product strategy? How will you do so, and what might the implications be? Once you determine (1) the decision you will make for every possible outcome of your research and (2) how each decision will be implemented, your research is set up to lead to actionable insights that have the power to move your business forward.
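    As a minimal sketch, such a pre-registered decision rule can be written down as code before the experiment runs; the function, the “twice as much” threshold, and the dollar figures below are illustrative assumptions:

```python
def decide(control_mean: float, treatment_mean: float,
           lift_threshold: float = 2.0) -> str:
    """Map an experimental outcome to a pre-specified business decision.

    lift_threshold=2.0 encodes the 'users save twice as much' criterion:
    the treatment ships only if its mean savings clears that bar
    against the control group's mean.
    """
    if treatment_mean >= lift_threshold * control_mean:
        return "ship feature"
    return "keep current product"


# Illustrative mean monthly savings (dollars) per arm.
print(decide(control_mean=50.0, treatment_mean=120.0))  # ship feature
print(decide(control_mean=50.0, treatment_mean=60.0))   # keep current product
```

    The point is not the specific threshold but that the rule is fixed before the data arrive, so every possible result already maps to an action.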

    P.S. You too can design and conduct experiments using the ‘backward market research’ method. Use our handy tool to guide you through the backward approach: ‘Beginning at the End’ Worksheet.

    And remember, to orient your research toward actionable decisions, start with the end in mind.

    [1] Andreasen, A. R. (1985). ‘Backward’ Market Research. Harvard Business Review, 63(3), 176–182.


    At the Center for Advanced Hindsight’s Startup Lab, our academic incubator for health and finance tech solutions, we aim to instill a commitment to research-backed business decisions in the companies we bring into our fold. We will be releasing more articles and tools like this as part of our Experimenting in Business Series on our blog. Leveraging research for smart business decisions is a powerful skill—we’re aiming to make rigorous experimenting less intimidating and more accessible to a broad range of businesses.

    By the way—we’re also accepting applications for the Startup Lab’s upcoming program, which starts in October. We’re looking for startups that are eager to experiment, and demonstrate a passion for building research-backed solutions to health and finance challenges.

    APPLY NOW.

    You only have until June 30th at 5pm EST.


     
  • feedwordpress 12:00:24 on 2017-06-18 Permalink
    Tags: , , , Uncategorized   

    Build better health and finance tech products for humans. Join the Startup Lab. 

    We’re excited to announce that we’re searching for our next class of the Startup Lab, which begins October 2017. Applications are only open until June 30th at 5pm EST.

    Apply to the Startup Lab now.

    Our academic incubator supports problem-solvers by making behavioral economics findings accessible and applicable. See how behavioral researchers and entrepreneurs work together at the Center for Advanced Hindsight:

    The Startup Lab provides:

    • Ability to explore behavioral economics and learn how to leverage findings for your startup
    • Opportunity to collaborate with world-renowned behavioral researchers
    • Guidance and resources to run rigorous experiments
    • Office space in downtown Durham, NC for up to 9 months (October–June)
    • Investment up to $60k

    Are you insatiably curious about what drives decisions, shapes motivation, and influences behavior? Does your startup’s success hinge on the ability to affect positive behavior change that helps people live happier, healthier, and wealthier lives?

    We’re looking for startups that are eager to experiment, and demonstrate a passion for building research-backed solutions to health and finance challenges.

    APPLY NOW.

    You only have until June 30th at 5pm EST.

     

    To learn more, visit our FAQs (includes information on the Startup Lab investment structure and other logistics) or connect with us at startup@danariely.com.

     

     

     


     