
    Senior US election official: Trump’s misinformation is “insulting”


    On Thursday, President Donald Trump sent an all-caps tweet claiming that voting machines from a company called Dominion Voting Systems deleted millions of votes for him around the country. The claim isn’t true, but he is the president—so it has had an impact. Election workers say they fear for their safety. They’re receiving death threats from supporters of the president. 

    Ben Hovland knows voting machines well. He runs the Election Assistance Commission (EAC), an independent federal agency that, among other jobs, tests and certifies voting machines. The EAC writes voting system standards and tests the machines in labs for security, usability, and safety. And Hovland says there has been no widespread fraud or malfunction that would change the result of the election. Nor has the president—or the lawyers who have unsuccessfully tried challenging the result—produced any actual evidence for those claims.

    Hovland and I discussed what’s happened since the election, and the extraordinary amount of disinformation coming from the White House. During our conversation, which has been edited for length and clarity, Hovland talked about the president’s legal woes, the future of election security officials, and his message for Donald Trump.

    Q: What’s your reaction when the president tweets that Dominion deleted 2.7 million Trump votes?

    A: Number one, it’s pretty baffling. Number two, I just wish that if claims like that were going to be made, they would actually be backed up with something credible. I think those types of statements matter. They cause Americans to lose confidence in the process.

    That’s really concerning. Look at the president’s litigation. What we see is a very different story in front of a microphone or on Twitter than we see in front of a courtroom or in front of a judge. We see bold statements on Twitter or at the podium and we see hearsay and we see laughable evidence presented to courts. There’s just not a correlation between those. 

    This story isn’t new. You look back at the 2016 election, the president made claims that he lost the popular vote because allegedly millions of non-citizens voted. A presidential commission was created to find those millions of non-citizens and prove voter fraud. They didn’t. It was disbanded in embarrassment. We see that time and time again. There has been no evidence anywhere of widespread voter fraud.

    Frankly, it’s disrespectful to the people who run elections, it’s disrespectful to their integrity, to make these kinds of allegations, particularly when you’re not providing evidence. Anything that has been brought up has been easily refuted, because it’s largely conspiracy theories. If there is anything to this, election officials will want to get to the bottom of it more than anyone. They care about the integrity of the process and want to make sure that it was fair and that the will of the voters is reflected. 

    Q: It was recently reported that Chris Krebs, director of the Cybersecurity and Infrastructure Security Agency (CISA), is being pressured by the White House to change the agency’s Rumor Control page, which combats election misinformation in real time. Krebs now expects to be fired because he refuses to change the facts. What’s your reaction to seeing a well-respected election security official feeling that he’s got a sword over his head for the act of getting the facts out?

    A: That alone tells you as much as anything I can say. The reality is Krebs has done a great job. Without his leadership, we would be nowhere close to where we are. I’ve said many times there’s been a sea change in information sharing between state, local, and federal partners on election security. So much of that credit goes to director Krebs and his leadership. 

    Rumor Control has been a fantastic resource. We really have seen an absurd number of baseless allegations made. None has been rooted in any real fact. It’s important to get the real story out there. Director Krebs has done a great job of empowering his staff and meeting election officials where they are, bipartisan and across the board, recognizing that our elections are decentralized. Each state runs elections in its own unique way. And that means you need to approach the space respecting that and knowing that different states and different election officials will have different challenges and need different assistance.

    He’s done a great job recognizing that and adapting the program. The election infrastructure sub-sector has been the fastest growing sub-sector that the government has ever stood up. Certainly it’s led to the most secure election we’ve ever had. 

    I was at the CISA operations center on Election Day. Between being there, having representatives from election organizations, from the manufacturer community, and from the intelligence community, and having election officials around the country in virtual rooms, we were able to have a level of visibility into what was happening across the country like we never had before. 

    Look at the things that popped up on Election Day. No Election Day is perfect, elections never are, but this was done really well. And the things that popped up were common election problems. There were some machines that didn’t start. There were some issues with the e-poll books. There were some poll workers who didn’t show up; that happens. But we were able to see those issues pop up and quickly address them. There were regular press background briefings giving the basics: ‘here’s what we’re seeing, here’s what we know.’ Before the e-poll book issue spun up into some grand conspiracy, the facts were ascertained and shared, and we knew it was localized, being resolved, and not a major cyber incident. 

    That visibility, and the ability to keep things from snowballing, also made a big difference this year. And so much of that is due to the work that director Krebs has done and his leadership in the space. I hope that he continues in the role for as long as he wants.

    Q: Are you worried about further politicization of the election process? 

    A: I certainly hope that doesn’t happen. What you’re seeing in Rumor Control and in so many of these efforts is a commitment to the oath that we swore to the Constitution.

    It’s trying to get the truth out about how our elections run. The security and integrity of the election, the story of what this election was, is the will of the voters: a record number of Americans cast ballots this year. Ultimately that is our democracy. And you’ve got to respect the will of the voters. 

    Q: Do you think the situation is exacerbated by the fact that it’s specifically the president who is putting a megaphone to this misinformation? 

    A: I think that is alarming, particularly the press conference so many networks cut away from. I think most Americans are not accustomed to seeing the president stand at the White House podium, behind the presidential seal, and make accusations like that, accusations that his lawyers and others have failed to back up with any actual evidence or proof. 

    A lot of Americans listen to the president. They respect the office or they are supporters of the president. You saw in some ways how that played out in people’s use of mail-in and absentee ballots. Some people have raised questions about how the percentage of mail-in and absentee ballots going to president-elect Biden was so overwhelming. Well, that’s because the president spent months saying you couldn’t trust mail-in ballots.

    Certainly there’s a portion of the American people that believes him, and that is very concerning, because we had a free and fair election. The people have made their voices heard, and election officials have put in an unbelievable amount of work to ensure that this was a smooth election and that the election has integrity. 

    Any claims otherwise just are sowing divisiveness amongst the American people. That is what our foreign adversaries want. They want to see these divides. They want to see us lose faith in our democratic process and systems. It’s really unfortunate to be doing anything that would cause Americans to lose faith in the process, particularly one that worked so well this year. 

    Q: If you could talk face-to-face with President Trump today about this election, what message would you deliver?

    A: More than anything, I would talk about the consequences of these statements for election officials. I’ve heard from election officials personally. I’ve seen them in the media concerned about their own safety, the safety of their staff. These accusations, these conspiracy theories that are flying around, have consequences.

    At a minimum, it’s insulting to the professionals that run our elections and hopefully that’s the worst that comes of it. Our people, they’re doing their jobs but they don’t feel safe doing it. That is a tragedy. That is awful. These are public servants. This isn’t a job you do for glory or to get rich.

    It’s the job you do because you believe in our country, you believe in our democracy, and you want to help Americans. I can think of few callings that are higher. And I think it’s just really unfortunate that in a year when we should be singing their praises and giving them credit, we’re instead talking about them receiving threats and being scared. That is unacceptable. 



    from MIT Technology Review https://ift.tt/2JYB1ha
    via IFTTT


    The key to smarter robot collaborators may be more simplicity


    Think of all the subconscious processes you perform while you’re driving. As you take in information about the surrounding vehicles, you’re anticipating how they might move and thinking on the fly about how you’d respond to those maneuvers. You may even be thinking about how you might influence the other drivers based on what they think you might do.

    If robots are to integrate seamlessly into our world, they’ll have to do the same. Now researchers from Stanford University and Virginia Tech have proposed a new technique to help robots perform this kind of behavioral modeling, which they will present at the annual international Conference on Robot Learning next week. It involves the robot summarizing only the broad strokes of other agents’ motions rather than capturing them in precise detail. This allows it to nimbly predict their future actions and its own responses without getting bogged down by heavy computation.

    A different theory of mind

    Traditional methods for helping robots work alongside humans take inspiration from an idea in psychology called theory of mind. It suggests that people engage and empathize with one another by developing an understanding of one another’s beliefs—a skill we develop when we’re young children. Researchers who draw upon this theory focus on getting robots to construct a model of their collaborators’ underlying intent as the basis for predicting their actions.

    Dorsa Sadigh, an assistant professor at Stanford, thinks this is inefficient. “If you think about human-human interactions, we don’t really do that,” she says. “If we’re trying to move a table together, we don’t do belief modeling.” Instead, she says, two people moving a table rely on simple signals like the forces they feel from their collaborator pushing or pulling the table. “So I think what is really happening is that when humans are doing a task together, they keep track of something that’s much lower dimensional.”

    Based on this idea, the robot stores very simple descriptions of its surrounding agents’ actions. In a game of air hockey, for example, it might store its opponents’ movements with only one word: “right,” “left,” or “center.” It then uses this data to train two separate algorithms: the first, a machine-learning algorithm that predicts where the opponent will move next; the second, a reinforcement-learning algorithm to determine how it should respond. The latter algorithm also keeps track of how the opponent changes tack based on its own response, so it can learn to influence the opponent’s actions.
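    Here is a minimal sketch of how that two-part setup might look, assuming a toy, hypothetical version of the air-hockey task. The coarse labels (“left,” “center,” “right”) come from the article; the class names, the count-based predictor, the tabular Q-learner, the reward, and the simulated opponent are invented for illustration and are not the researchers’ actual implementation.

```python
# Illustrative sketch only, not the paper's code.
# Part 1: a supervised model that predicts the opponent's next coarse move.
# Part 2: a tabular Q-learner that chooses the robot's response to that prediction.
import random
from collections import Counter, defaultdict

MOVES = ["left", "center", "right"]  # coarse labels, as in the article

class OpponentPredictor:
    """Predicts the opponent's next coarse move from its previous one (count-based)."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def update(self, prev_move, next_move):
        self.counts[prev_move][next_move] += 1

    def predict(self, prev_move):
        if not self.counts[prev_move]:
            return random.choice(MOVES)
        return self.counts[prev_move].most_common(1)[0][0]

class ResponsePolicy:
    """Epsilon-greedy tabular Q-learning over (predicted opponent move, robot action)."""
    def __init__(self, lr=0.1, eps=0.1):
        self.q = defaultdict(float)
        self.lr, self.eps = lr, eps

    def act(self, predicted):
        if random.random() < self.eps:
            return random.choice(MOVES)
        return max(MOVES, key=lambda a: self.q[(predicted, a)])

    def update(self, predicted, action, reward):
        key = (predicted, action)
        self.q[key] += self.lr * (reward - self.q[key])

def opponent_move(prev):
    """Toy opponent: tends to repeat its previous coarse move."""
    return prev if random.random() < 0.7 else random.choice(MOVES)

predictor, policy = OpponentPredictor(), ResponsePolicy()
prev = "center"
for _ in range(5000):
    predicted = predictor.predict(prev)        # part 1: forecast the opponent
    action = policy.act(predicted)             # part 2: choose a response
    actual = opponent_move(prev)
    reward = 1.0 if action == actual else 0.0  # stand-in for "blocking" the puck
    policy.update(predicted, action, reward)
    predictor.update(prev, actual)
    prev = actual

print({m: predictor.predict(m) for m in MOVES})  # learned next-move guesses
```

    The sketch omits the part of the approach where the policy also tracks how the opponent changes tack in response to the robot, which is what lets it learn to influence rather than merely react.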

    The key idea here is the lightweight nature of the training data, which is what allows the robot to perform all this parallel training on the fly. A more traditional approach might store the xyz-coordinates of the entire pathway of the opponent’s movements, not just their overarching direction. While it may seem counterintuitive that less is more, it’s worth recalling Sadigh’s theory of human interaction. We, too, model the people around us only in broad strokes, rather than calculating their exact coordinates.
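    As a concrete illustration of that compression, here is a tiny, hypothetical helper that collapses a full trajectory of xyz points into one of the three coarse labels. The threshold and the assumption that x is the lateral axis are made up for the example.

```python
# Sketch: reduce a detailed (x, y, z) trajectory to a single coarse label.
def coarse_label(trajectory, threshold=0.05):
    """trajectory: list of (x, y, z) points; x is assumed to be the lateral axis."""
    dx = trajectory[-1][0] - trajectory[0][0]  # net lateral displacement
    if dx > threshold:
        return "right"
    if dx < -threshold:
        return "left"
    return "center"

path = [(0.00, 0.1, 0.0), (0.04, 0.3, 0.0), (0.12, 0.6, 0.0)]
print(coarse_label(path))  # "right"
```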

    The researchers tested this idea in simulation, including for a self-driving car application, and in the real world with a game of robot air hockey. In each of the trials, the new technique outperformed previous methods for teaching robots to adapt to surrounding agents. The robot also effectively learned to influence those around it.

    Future work

    There are still some caveats that future research will have to resolve. The work currently assumes, for example, that every interaction that the robot engages in is finite, says Jakob Foerster, an assistant professor at the University of Toronto, who was not involved in the work. 

    In the self-driving car simulation, the researchers assumed that the robot car was only experiencing one clearly bounded interaction with another car during each round of training. But driving, of course, doesn’t work like that. Interactions are often continuous and would require a self-driving car to learn and adapt its behavior within each interaction, not just between them.

    Another challenge, Sadigh says, is that the approach assumes knowledge of the best way to describe a collaborator’s behavior. The researchers themselves had to come up with the labels “right,” “left,” and “center” in the air hockey game for the robot to describe its opponent’s actions. Those labels won’t always be so obvious in more complicated interactions.

    Nonetheless, Foerster sees promise in the paper’s contribution. “Bridging the gap between multi-agent learning and human-AI interaction is a super important avenue for future research,” he says. “I’m really excited for when these things get put together.”



    from MIT Technology Review https://ift.tt/38HznuW
    via IFTTT


    Covid-19 vaccines shouldn’t get emergency-use authorization


    I really want a covid-19 vaccine. Like many Americans, I have family members and neighbors who have been sickened and killed by the new coronavirus. My sister is a nurse on a covid-19 ward, and I want her to be able to do her job safely. As a health-care lawyer, I have the utmost confidence in the career scientists at the US Food and Drug Administration who would ultimately determine whether to issue an emergency-use authorization for a covid-19 vaccine. But I am deeply worried about what could happen if they do. 

    The pace of covid-19 vaccine research has been astonishing: there are more than 200 vaccine candidates in some stage of development, including several that are already in phase 3 clinical trials, mere months after covid-19 became a global public health emergency. In order for the FDA to approve a vaccine, however, not only do these clinical trials need to be completed—a process that typically involves following tens of thousands of participants for at least six months—but the agency also needs to inspect production facilities, review detailed manufacturing plans and data about the product’s stability, and pore over reams of trial data. This review can easily take a year or more.

    That’s why, for several months now, the FDA has been considering criteria for initially deploying a covid-19 vaccine under an emergency-use authorization, or EUA, before the FDA has all the information normally required for full approval. At least a few of the manufacturers currently in phase 3 trials have publicly stated their intent to request an EUA. Pfizer plans to do so later this month in light of the exciting preliminary results for its vaccine.

    EUAs allow the FDA to make unapproved products available during public health emergencies. While the FDA has issued EUAs sparingly for diagnostics and therapies aimed at other infectious diseases, such as H1N1 and Zika, a vaccine has never been used in civilians under an EUA. Vaccines are different from other medical products in that they are deployed broadly and in healthy people, so the bar for approving one is high.

    The FDA’s Vaccines and Related Biological Products Advisory Committee, a group of outside experts who advise the FDA on vaccines, met for the first time to discuss covid-19 vaccines on October 22. Some committee members questioned whether the FDA had set the bar for a vaccine EUA high enough. Members also expressed several important concerns about authorizing a vaccine through an EUA.

    One concern is that once a vaccine is authorized in this manner, it may be difficult—for ethical and practical reasons—to complete clinical trials involving that vaccine (and thus to collect additional safety data and population-specific data for groups disproportionately affected by covid-19). It could also hamper scientists’ ability to study other covid-19 vaccine candidates that may be “better” in various ways than the first across the finish line.

    But the most important consideration in my view relates to public trust.

    Public health experts caution that vaccines don’t protect people; only vaccinations do. A vaccine that hasn’t gained enough public trust will therefore have a limited ability to control the pandemic even if it’s highly effective.

    Data from the Pew Research Center show declining trust in a covid-19 vaccine across all genders, racial and ethnic categories, ages, and education levels, with many people citing safety and the pace of approval as key factors in their skepticism. Information presented to the advisory committee by the Reagan-Udall Foundation similarly showed significant distrust in the speed of vaccine development, likely exacerbated by recent political interference with the FDA and the US Centers for Disease Control and Prevention (CDC) and some politicians’ promises that a vaccine would be available before the end of the year. People of color have expressed additional concerns with vaccine research.

    Judging from their written and verbal comments to the advisory committee, major vaccine manufacturers recognize the potential disruptions to subsequent clinical trials and are seeking the FDA’s advice to address them. While those considerations are daunting, I suspect that manufacturers and the FDA could create workable responses. But even then, the public trust issues associated with EUAs—which most of the public first heard about through the hydroxychloroquine debacle and again in the context of the convalescent plasma controversy—still make this tool a poor fit for vaccines.

    Instead, if vaccine trial data are promising enough to warrant giving some people pre-approval access to a covid-19 vaccine, the FDA should do so using a mechanism called “expanded access.” While the FDA ordinarily uses expanded access to make experimental treatments available to sick patients who have no alternative treatment available, it has been used for vaccines before and could be used now to avoid disrupting ongoing clinical trials or fostering public perceptions that a vaccine was being rushed because of an “emergency.” Expanded-access programs are also overseen by ethics committees and have informed consent requirements for patients that go beyond those associated with products authorized by EUA.

    Not only must the public trust a covid-19 vaccine enough to seek out the first wave of authorized vaccines, but that trust must be resilient enough to withstand potential setbacks: protection below 100% (and perhaps below 50%), significant side effects (or rumors of them), and possible recalls. That level of trust takes time to rebuild if it has been eroded. And the stakes here are not just the slowing of this pandemic. As former senior health official Andy Slavitt recently said, “Done right, vaccines end pandemics. Done wrong, pandemics end vaccines.”

    Clint Hermes, a former academic medical center general counsel, has advised universities, teaching hospitals, and life sciences companies on global health problems. He has helped set up vaccination, treatment, and surveillance projects for infectious diseases in North and South America, Africa, Asia, and the Middle East. The views expressed here are his own and not those of any organization with which he is affiliated, including his employer. The information presented here should not be construed as legal advice.



    from MIT Technology Review https://ift.tt/38NckP0
    via IFTTT


    How the pandemic readied Alibaba’s AI for the world’s biggest shopping day


    The news: While the US has been hooked on its election, China has been shopping. From November 1 to 11, the country’s top e-commerce giants, Alibaba and JD, generated $115 billion in sales as part of their annual Singles’ Day shopping bonanza. Alibaba, which started the festival in 2009, accounted for $74.1 billion of those sales, a 26% increase on last year. For comparison, Amazon’s 48-hour Prime Day sales only crossed the $10-billion mark this year.

    Pandemic stress test: The sheer scale of the event makes it somewhat of a logistical miracle. To pull off the feat, Alibaba and JD invest heavily in AI models and other technology infrastructure to predict shopping demand, optimize the global distribution of goods across warehouses, and streamline worldwide delivery. The systems are usually tested and refined throughout the year before being stretched to their limits during the actual event. This year, however, both companies faced a complication: accounting for changes in shopping behavior due to the pandemic.

    Broken models: In the initial weeks after the coronavirus outbreak, both companies saw their AI models behaving oddly. Because the pandemic struck during the Chinese New Year, hundreds of millions of people who would have otherwise been holiday shopping were instead buying lockdown necessities. The erratic behavior made it impossible to rely on historical data. “All of our forecasts were no longer accurate,” says Andrew Huang, general manager of the domestic supply chain at Cainiao, Alibaba’s logistics division.

    People were also buying things for different reasons, which threw off the platforms’ product recommendations. For example, JD’s algorithm assumed that people who bought masks were sick and so recommended medicine, when it might have made more sense to recommend hand sanitizer.

    Changing tack: The breakdown of their models forced both companies to get creative. Alibaba doubled down on its short-term forecasting strategy, says Huang. Rather than project shopping patterns based on season, for example, it refined its models to factor in more immediate variables like the previous week of sales leading up to major promotional events or external data like the number of covid cases in each province. As livestreaming e-commerce (showing off products in real time and answering questions from buyers) exploded in popularity during quarantine, the company also built a new forecasting model to project what happens when popular livestream influencers market different products.
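    To make the idea of short-horizon forecasting concrete, here is a minimal sketch of a model driven by immediate signals such as last week’s sales and local covid case counts, the kinds of variables the article describes. The numbers, feature set, and ordinary-least-squares model are invented for illustration and are not Alibaba’s actual system.

```python
# Illustrative sketch only: short-term demand forecast from recent signals.
import numpy as np

# Each row: [units sold last week, covid cases in the province, promo flag]
X = np.array([
    [1200, 15, 0],
    [1350, 40, 0],
    [2100, 10, 1],
    [1800, 90, 1],
    [1500, 60, 0],
], dtype=float)
y = np.array([1300, 1250, 2400, 1700, 1450], dtype=float)  # this week's sales

# Ordinary least squares with an intercept column.
X1 = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Forecast for a week with an upcoming promotion and low local case counts.
next_week = np.array([1600, 30, 1, 1], dtype=float)
print(float(next_week @ coef))
```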

    And JD retooled its algorithms to consider more external and real-time data signals, like covid case loads, news articles, and public sentiment on social media.

    Unexpected boon: Adding these new data sources into their models seems to have worked. Alibaba’s new livestreaming AI model, for example, ended up playing a core role in forecasting sales after the company made livestreaming a core part of its Singles’ Day strategy. For JD, its updates may have also increased overall sales. The company says it saw a 3% increase in click-through rate on its product recommendations after it rolled out its improved algorithm, a pattern that held up during Singles’ Day.

    Understanding context: Both companies have learned from the experience. For example, Huang says his team learned that each livestream influencer mobilizes its fan base to exhibit different purchasing behaviors, so it will continue to create bespoke prediction models for each of its top influencers. Meanwhile, JD says it has realized how much news and current events influence e-commerce patterns and will continue to tweak its product recommendation algorithm accordingly.



    from MIT Technology Review https://ift.tt/3km9dQs
    via IFTTT


    AI is wrestling with a replication crisis


    Last month Nature published a damning response written by 31 scientists to a study from Google Health that had appeared in the journal earlier this year. Google was describing successful trials of an AI that looked for signs of breast cancer in medical images. But according to its critics, the Google team provided so little information about its code and how it was tested that the study amounted to nothing more than a promotion of proprietary tech.

    “We couldn’t take it anymore,” says Benjamin Haibe-Kains, the lead author of the response, who studies computational genomics at the University of Toronto. “It’s not about this study in particular—it’s a trend we’ve been witnessing for multiple years now that has started to really bother us.”

    Haibe-Kains and his colleagues are among a growing number of scientists pushing back against a perceived lack of transparency in AI research. “When we saw that paper from Google, we realized that it was yet another example of a very high-profile journal publishing a very exciting study that has nothing to do with science,” he says. “It’s more an advertisement for cool technology. We can’t really do anything with it.”

    Science is built on a bedrock of trust, which typically involves sharing enough details about how research is carried out to enable others to replicate it, verifying results for themselves. This is how science self-corrects and weeds out results that don’t stand up. Replication also allows others to build on those results, helping to advance the field. Science that can’t be replicated falls by the wayside.

    At least, that’s the idea. In practice, few studies are fully replicated because most researchers are more interested in producing new results than reproducing old ones. But in fields like biology and physics—and computer science overall—researchers are typically expected to provide the information needed to rerun experiments, even if those reruns are rare.

    Ambitious noob

    AI is feeling the heat for several reasons. For a start, it is a newcomer. It has only really become an experimental science in the past decade, says Joelle Pineau, a computer scientist at Facebook AI Research and McGill University, who coauthored the complaint. “It used to be theoretical, but more and more we are running experiments,” she says. “And our dedication to sound methodology is lagging behind the ambition of our experiments.”

    The problem is not simply academic. A lack of transparency prevents new AI models and techniques from being properly assessed for robustness, bias, and safety. AI moves quickly from research labs to real-world applications, with direct impact on people’s lives. But machine-learning models that work well in the lab can fail in the wild—with potentially dangerous consequences. Replication by different researchers in different settings would expose problems sooner, making AI stronger for everyone. 

    AI already suffers from the black-box problem: it can be impossible to say exactly how or why a machine-learning model produces the results it does. A lack of transparency in research makes things worse. Large models need as many eyes on them as possible, more people testing them and figuring out what makes them tick. This is how we make AI in health care safer, AI in policing more fair, and chatbots less hateful.

    What’s stopping AI replication from happening as it should is a lack of access to three things: code, data, and hardware. According to the 2020 State of AI report, a well-vetted annual analysis of the field by investors Nathan Benaich and Ian Hogarth, only 15% of AI studies share their code. Industry researchers are bigger offenders than those affiliated with universities. In particular, the report calls out OpenAI and DeepMind for keeping code under wraps.

    Then there’s the growing gulf between the haves and have-nots when it comes to the two pillars of AI, data and hardware. Data is often proprietary, such as the information Facebook collects on its users, or sensitive, as in the case of personal medical records. And tech giants carry out more and more research on enormous, expensive clusters of computers that few universities or smaller companies have the resources to access.

    To take one example, training the language generator GPT-3 is estimated to have cost OpenAI $10 to $12 million—and that’s just the final model, not including the cost of developing and training its prototypes. “You could probably multiply that figure by at least one or two orders of magnitude,” says Benaich, who is founder of Air Street Capital, a VC firm that invests in AI startups. Only a tiny handful of big tech firms can afford to do that kind of work, he says: “Nobody else can just throw vast budgets at these experiments.”

    The rate of progress is dizzying, with thousands of papers published every year. But unless researchers know which ones to trust, it is hard for the field to move forward. Replication lets other researchers check that results have not been cherry-picked and that new AI techniques really do work as described. “It’s getting harder and harder to tell which are reliable results and which are not,” says Pineau.

    What can be done? Like many AI researchers, Pineau divides her time between university and corporate labs. For the last few years, she has been the driving force behind a change in how AI research is published. For example, last year she helped introduce a checklist of things that researchers must provide, including code and detailed descriptions of experiments, when they submit papers to NeurIPS, one of the biggest AI conferences.

    Replication is its own reward

    Pineau has also helped launch a handful of reproducibility challenges, in which researchers try to replicate the results of published studies. Participants select papers that have been accepted to a conference and compete to rerun the experiments using the information provided. But the only prize is kudos.

    This lack of incentive is a barrier to such efforts throughout the sciences, not just in AI. Replication is essential, but it isn’t rewarded. One solution is to get students to do the work. For the last couple of years, Rosemary Ke, a PhD student at Mila, a research institute in Montreal founded by Yoshua Bengio, has organized a reproducibility challenge where students try to replicate studies submitted to NeurIPS as part of their machine-learning course. In turn, some successful replications are peer-reviewed and published in the journal ReScience. 

    “It takes quite a lot of effort to reproduce another paper from scratch,” says Ke. “The reproducibility challenge recognizes this effort and gives credit to people who do a good job.” Ke and others are also spreading the word at AI conferences via workshops set up to encourage researchers to make their work more transparent. This year Pineau and Ke extended the reproducibility challenge to seven of the top AI conferences, including ICML and ICLR. 

    Another push for transparency is the Papers with Code project, set up by AI researcher Robert Stojnic when he was at the University of Cambridge. (Stojnic is now a colleague of Pineau’s at Facebook.) Launched as a stand-alone website where researchers could link a study to the code that went with it, this year Papers with Code started a collaboration with arXiv, a popular preprint server. Since October, all machine-learning papers on arXiv have come with a Papers with Code section that links directly to code that authors wish to make available. The aim is to make sharing the norm.

    Do such efforts make a difference? Pineau found that last year, when the checklist was introduced, the number of researchers including code with papers submitted to NeurIPS jumped from less than 50% to around 75%. Thousands of reviewers say they used the code to assess the submissions. And the number of participants in the reproducibility challenges is increasing.

    Sweating the details

    But it is only a start. Haibe-Kains points out that code alone is often not enough to rerun an experiment. Building AI models involves making many small changes—adding parameters here, adjusting values there. Any one of these can make the difference between a model working and not working. Without metadata describing how the models are trained and tuned, the code can be useless. “The devil really is in the detail,” he says.
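
    To make that point concrete, here is a minimal, hypothetical sketch of the kind of metadata that turns shared code into a rerunnable experiment: hyperparameters, random seeds, and software versions recorded alongside the model code. The function and file names are illustrative, not taken from any of the studies discussed here.

```python
# A hypothetical sketch: record the settings needed to rerun a training
# experiment (hyperparameters, random seed, software versions) next to the code.
import json
import platform
import random
import sys

def collect_run_metadata(hyperparams: dict, seed: int) -> dict:
    """Bundle the settings another researcher would need to rerun this experiment."""
    random.seed(seed)  # the same seed controls weight init and data shuffling
    return {
        "hyperparameters": hyperparams,            # e.g. learning rate, batch size
        "seed": seed,
        "python_version": platform.python_version(),
        "platform": sys.platform,                  # library versions would go here too
    }

if __name__ == "__main__":
    metadata = collect_run_metadata(
        {"learning_rate": 3e-4, "batch_size": 64, "epochs": 10}, seed=42
    )
    with open("run_metadata.json", "w") as f:
        json.dump(metadata, f, indent=2)
```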

    It’s also not always clear exactly what code to share in the first place. Many labs use special software to run their models; sometimes this is proprietary. It is hard to know how much of that support code needs to be shared as well, says Haibe-Kains.

    Pineau isn’t too worried about such obstacles. “We should have really high expectations for sharing code,” she says. Sharing data is trickier, but there are solutions here too. If researchers cannot share their data, they might give directions so that others can build similar data sets. Or there could be a process in which a small number of independent auditors are given access to the data and verify results for everybody else, says Haibe-Kains.

    Hardware is the biggest problem. But DeepMind claims that big-ticket research like AlphaGo or GPT-3 has a trickle-down effect, where money spent by rich labs eventually leads to results that benefit everyone. AI that is inaccessible to other researchers in its early stages, because it requires a lot of computing power, is often made more efficient—and thus more accessible—as it is developed. “AlphaGo Zero surpassed the original AlphaGo using far less computational resources,” says Koray Kavukcuoglu, vice president of research at DeepMind.

    In theory, this means that even if replication is delayed, at least it is still possible. Kavukcuoglu notes that Gian-Carlo Pascutto, a Belgian coder at Mozilla who writes chess and Go software in his free time, was able to re-create a version of AlphaGo Zero called Leela Zero, using algorithms outlined by DeepMind in its papers. Pineau also thinks that flagship research like AlphaGo and GPT-3 is rare. The majority of AI research is run on computers that are available to the average lab, she says. And the problem is not unique to AI. Pineau and Benaich both point to particle physics, where some experiments can only be done on expensive pieces of equipment such as the Large Hadron Collider.

    In physics, however, university labs run joint experiments on the LHC. Big AI experiments are typically carried out on hardware that is owned and controlled by companies. But even that is changing, says Pineau. For example, a group called Compute Canada is putting together computing clusters to let universities run large AI experiments. Some companies, including Facebook, also give universities limited access to their hardware. “It’s not completely there,” she says. “But some doors are opening.”

    Haibe-Kains is less convinced. When he asked the Google Health team to share the code for its cancer-screening AI, he was told that it needed more testing. The team repeats this justification in a formal reply to Haibe-Kains’s criticisms, also published in Nature: “We intend to subject our software to extensive testing before its use in a clinical environment, working alongside patients, providers and regulators to ensure efficacy and safety.” The researchers also said they did not have permission to share all the medical data they were using.

    It’s not good enough, says Haibe-Kains: “If they want to build a product out of it, then I completely understand they won’t disclose all the information.” But he thinks that if you publish in a scientific journal or conference, you have a duty to release code that others can run. Sometimes that might mean sharing a version that is trained on less data or uses less expensive hardware. It might give worse results, but people will be able to tinker with it. “The boundaries between building a product versus doing research are getting fuzzier by the minute,” says Haibe-Kains. “I think as a field we are going to lose.” 

    Research habits die hard

    If companies are going to be criticized for publishing, why do it at all? There’s a degree of public relations, of course. But the main reason is that the best corporate labs are filled with researchers from universities. To some extent the culture at places like Facebook AI Research, DeepMind, and OpenAI is shaped by traditional academic habits. Tech companies also win by participating in the wider research community. All big AI projects at private labs are built on layers and layers of public research. And few AI researchers haven’t made use of open-source machine-learning tools like Facebook’s PyTorch or Google’s TensorFlow.

    As more research is done in house at giant tech companies, certain trade-offs between the competing demands of business and research will become inevitable. The question is how researchers navigate them. Haibe-Kains would like to see journals like Nature split what they publish into separate streams: reproducible studies on one hand and tech showcases on the other.

    But Pineau is more optimistic. “I would not be working at Facebook if it did not have an open approach to research,” she says. 

    Other large corporate labs stress their commitment to transparency too. “Scientific work requires scrutiny and replication by others in the field,” says Kavukcuoglu. “This is a critical part of our approach to research at DeepMind.”

    “OpenAI has grown into something very different from a traditional laboratory,” says Kayla Wood, a spokesperson for the company. “Naturally that raises some questions.” She notes that OpenAI works with more than 80 industry and academic organizations in the Partnership on AI to think about long-term publication norms for research.

    Pineau believes there’s something to that. She thinks AI companies are demonstrating a third way to do research, somewhere between Haibe-Kains’s two streams. She contrasts the intellectual output of private AI labs with that of pharmaceutical companies, for example, which invest billions in drugs and keep much of the work behind closed doors.

    The long-term impact of the practices introduced by Pineau and others remains to be seen. Will habits be changed for good? What difference will it make to AI’s uptake outside research? A lot hangs on the direction AI takes. The trend for ever larger models and data sets—favored by OpenAI, for example—will continue to make the cutting edge of AI inaccessible to most researchers. On the other hand, new techniques, such as model compression and few-shot learning, could reverse this trend and allow more researchers to work with smaller, more efficient AI.
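
    As one concrete example of what “model compression” can mean in practice, the sketch below applies PyTorch’s dynamic quantization to a small, invented model, shrinking its linear-layer weights to 8-bit integers. It is an illustration of the general technique, not of any system mentioned in this article.

```python
# Illustration of model compression via dynamic quantization in PyTorch.
# The toy model is invented; real gains come on much larger networks.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Convert Linear layers to 8-bit integer weights; activations are quantized
# on the fly at inference time, cutting memory use with little accuracy loss.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 512)
print(quantized(x).shape)  # same interface as the original model
```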

    Either way, AI research will still be dominated by large companies. If it’s done right, that doesn’t have to be a bad thing, says Pineau: “AI is changing the conversation about how industry research labs operate.” The key will be making sure the wider field gets the chance to participate. Because the trustworthiness of AI, on which so much depends, begins at the cutting edge. 



    from MIT Technology Review https://ift.tt/2IwYzt7
    via IFTTT


    What Biden means for Big Tech—and Google in particular

    0

    Throughout his campaign to win the White House, president-elect Joe Biden has been relatively quiet about the technology industry. 

    In a revealing January 2020 interview with The New York Times editorial board, Biden said that he wanted to revoke Section 230; suggested that he disagreed with how friendly the Obama administration became with Silicon Valley; and referred to tech executives as “little creeps” who displayed an “overwhelming arrogance.” But the internet industry has also been one of his campaign’s top 10 donors, technology industry insiders joined his campaign, and incoming vice president Kamala Harris has long-standing ties to Silicon Valley as the former district attorney of San Francisco.

    Aside from expanding broadband access and technology’s role in climate change and the coronavirus response, however, technology may not be high on Biden’s list of priorities, says Gigi Sohn, who served as counselor to Federal Communications Commission chairman Tom Wheeler during the Obama administration.

    Biden suggested that he disagreed with how friendly the Obama administration became with Silicon Valley… but the internet industry has been one of his campaign’s top donors.

    She says he’s going to inherit other major issues that will—and should—take up his administration’s early focus. “We could talk about the evils of the internet, but you still need it,” she says. “I think making sure that every American has access to affordable broadband is more important [than regulating the Internet], because they need that to live right now…to work…to learn…and to see a doctor.”

    On Sunday morning, less than 24 hours after the first network called the presidential election for Joe Biden, the president-elect had published a transition website detailing his administration’s agenda. It had four priority areas: covid-19, economic recovery, racial equity, and climate change. Technology was mentioned briefly, but with a focus on expanding broadband internet, rather than regulation of Big Tech companies. 

    So what will tech regulation look like under a Biden presidency? It’s not clear, but there are several areas worth paying attention to. 

    The Google lawsuit will continue

    In late October, the Department of Justice filed its long-awaited antitrust lawsuit against Google. While experts are divided on the strength of the lawsuit itself, they agree that it will continue under a Biden presidency. If anything, some argue that it will likely be strengthened, especially with several states including New York expected to file their own lawsuit, which may be combined with the DoJ’s effort.

    Additionally, the Biden administration has “the ability to amend that complaint,” says Charlotte Slaiman, the director of competition policy at the advocacy organization Public Knowledge. “There are actually more competition concerns around Google that could be included in a broader complaint,” she says, including potential anti-competitive practices in display advertising. 

    Meanwhile, Andrew Sullivan, the president and CEO of the Internet Society, says that he is “hopeful” that a Biden presidency would mean “fewer attempts to interfere in the direct operation of the internet.” This did not mean a repudiation of antitrust regulation, he added. “There are many Democrats who would like those companies to be broken up too, so we might not see a big change in policy.” 

    Refocusing the debate on Section 230 

    Biden has spoken out about the need to revoke Section 230, the section of the Communications Decency Act which shields internet companies from liability for the content that they host. 

    Sohn says that Biden’s actual stance is more nuanced, and that while Section 230 will continue to be an area of debate, regulators are likely to drop the enforcement actions proposed under Trump. “I can assure you that you know nobody in his leadership thinks the FCC ought to be the one interpreting the law.” Rather, she says, it falls to Congress to “fix the law” and to the courts to interpret it.

    On the other hand, Sohn believes that a Biden presidency will refocus the argument: rather than be driven by the Republican-led discourse on anti-conservative bias at social media companies, for which there is no evidence, the conversation will shift to how “these companies are too big and too powerful.”

    This was reflected in a series of tweets by Biden campaign deputy communications director Bill Russo, who said Facebook’s inability to deal with misinformation was “shredding the fabric of our democracy.”

    Different priorities 

    If Republicans hold the Senate, or if the Democrats hold only a narrow majority there, “tech antitrust falls too far down the list of priorities,” says Alec Stapp, the technology director for the Progressive Policy Institute. In particular, he says, the need to create a coronavirus plan and stimulus package will be the primary focus. 

    Over the summer, House Democrats published a 449-page report on the monopolistic practices of Apple, Amazon, Facebook, and Google. Slaiman calls it “a really big deal” and indicative, perhaps, of legislation to come. 

    Evan Greer, the deputy policy director of civil rights advocacy organization Fight for the Future, says that there already is a “generalized anger and anxiety about Big Tech companies’ abuses,” but that more policy is needed that can “attack the problem at its root.” This means not only breaking up monopolies, “but also ban[ning] harmful surveillance capitalist business models.” 

    According to some experts, like Sohn, this can be achieved through a national consumer privacy and data protection bill, similar to California’s Consumer Privacy Act—which was expanded in the state’s recent elections. “One of the things that make these companies so powerful is the fact that they have access to all our data,” she says. Limiting their access to data would effectively constrain that power and this, she says, would be her top priority in tech regulation. In fact, she says it is already being discussed.

    Bringing tech back into the tent?

    The Obama administration had an infamously cozy relationship with Silicon Valley, and indications from Biden’s campaign suggest that those same relationships have helped his efforts to get elected. 

    Biden launched his presidential bid at a fundraiser hosted by Comcast executive David Cohen in April 2019, and raised over $25 million from internet companies, according to data from the Center for Responsive Politics, which tracks campaign finance. A number of Silicon Valley insiders joined his team, including a former government affairs executive at Apple, Cynthia C. Hogan, who served as one of four co-chairs of his vice presidential selection committee.

    How these political donations and individual moves will impact the administration’s approach to Big Tech is still speculative, but the revolving door between politics and Silicon Valley has been well documented. 

    The aptly named Revolving Door Project, a nonprofit that tracks moves between industry and government, noted that 55 employees from Google alone joined the Obama administration in influential positions, while 197 former Obama officials joined Google after their time working for the government was over. 



    from MIT Technology Review https://ift.tt/2UpzbIp
    via IFTTT


    Biden has unveiled his covid-19 task force

    0

    The news: President-elect Joe Biden and Vice President–elect Kamala Harris have revealed the members of their covid-19 task force. Its 10 members are mostly former government health officials, top medical figures, and academics. The task force will have three cochairs: David Kessler, who ran the Food and Drug Administration under Presidents George H.W. Bush and Bill Clinton; Vivek H. Murthy, surgeon general during the Obama presidency; and Marcella Nunez-Smith, associate dean for health equity research at the Yale School of Medicine.

    Murthy and Kessler have both been involved in Biden’s pandemic preparations for months. Other members include Rick Bright, an immunologist and vaccine expert who was ousted from the Trump administration after filing a whistleblower complaint alleging that his coronavirus concerns were ignored, and Atul Gawande, a professor of surgery and health policy at Harvard University. In a statement released today, Biden said: “Dealing with the coronavirus pandemic is one of the most important battles our administration will face, and I will be informed by science and by experts. The advisory board will help shape my approach to managing the surge in reported infections; ensuring vaccines are safe, effective, and distributed efficiently, equitably, and free; and protecting at-risk populations.”

    What will the task force do? For now, its job will be to guide Biden’s policies and preparations as he plans to take office on January 20. The ultimate aim is to enact the policies Biden promised on the campaign trail—a seven-point plan that aims to fix testing and tracing, ramp up production of personal protective equipment, provide clear national guidance, and ensure the “effective, equitable” distribution of treatments and vaccines. He also plans to implement a national mask mandate, do more to shield older and higher-risk adults, and rebuild the institutions in charge of pandemic preparation and defense, to help ward off the next threat.

    The significance: To say the task force has its work cut out for it is a huge understatement. The coronavirus crisis continues to escalate rapidly in the US. New cases topped 100,000 a day for the fifth day in a row on Sunday, as the total number of positive tests reached 10 million and the death toll passed 237,000. Biden has said that his number one priority as president will be taming the pandemic. His decision to appoint this task force as one of his first actions since declaring victory underlines the fact that it tops his agenda, contrasting sharply with the current lack of a coordinated federal response.



    from MIT Technology Review https://ift.tt/36nnnvw
    via IFTTT


    Half the Milky Way’s sun-like stars could be home to Earth-like planets

    0

    Astronomers have discovered nearly 4,300 exoplanets, and it’s now quite obvious that our galaxy is filled with them. But the point of looking for these new worlds is more than just an exercise in stamp collecting—it’s to find one that could be home to life, be it future humans who have found a way to travel those distances or extraterrestrial life that’s made a home for itself already. The best opportunity to find something like that is to find a planet that resembles Earth.

    And what better way to look for Earth 2.0 than to search around stars similar to the sun? A new analysis of exoplanet data collected by NASA’s Kepler space telescope, which operated from 2009 to 2018, offers new predictions for how many stars in the Milky Way galaxy comparable to the sun in temperature and age are likely to be orbited by a rocky, potentially habitable planet like Earth. Applied to current estimates of 4.1 billion sun-like stars in the galaxy, the model suggests there are at minimum 300 million with at least one habitable planet. 

    The model’s average, however, posits that one in two sun-like stars could have a habitable planet, causing that figure to swell to over 2 billion. Even less conservative predictions suggest it could be over 3.6 billion.

    The new study has not yet been peer-reviewed, but it will be soon, and it is due to be published in the Astronomical Journal.

    “This appears to be a very careful study and deals with really thorny issues about extrapolating from the Kepler catalogue,” says Adam Frank, a physicist and astronomer at the University of Rochester, who was not involved with the study. “The goal is to get a complete, reliable, and accurate estimate for the average number of potentially habitable planets around stars. They seem to have made a good run at that.”

    Scientists have made several attempts in the past to use Kepler data to work out how many sun-like stars in the galaxy have potentially habitable exoplanets in their orbit. But these studies have provided answers that ranged from less than 1% to more than 100% (i.e., multiple planets around these stars). It’s a reflection of how hard it’s been to work with this data, says Steve Bryson of NASA Ames Research Center in California, who led the new work.

    Two major issues have created this large window: incomplete data, and the need to cull false detections from the Kepler data set.

    The new study addresses both of these problems. It’s the first of its kind to use the full Kepler exoplanet data set (more than 4,000 detections from 150,000 stars), and it also draws on stellar data from Gaia, the European Space Agency’s mission to map every star in the Milky Way. All that helped make the final estimates more accurate, with smaller uncertainties. And this is after scientists have spent years analyzing the Kepler catalogue to strip away obscuring elements and ensure that only real exoplanets are left. Armed with both Kepler and Gaia data, Bryson and his team were able to determine the rate of formation for sun-like stars in the galaxy, the number of stars likely to have rocky planets (with radii 0.5 to 1.5 times Earth’s), and the likelihood those planets would be habitable.

    On average, Bryson and his team predict, 37 to 60% of sun-like stars in the Milky Way should be home to at least one potentially habitable planet. Optimistically, the figure could be as high as 88%. The conservative calculations pull this figure down to 7% of sun-like stars in the galaxy (hence 300 million)—and on the basis of that number, the team predicts there are four sun-like stars with habitable planets within 30 light-years of Earth. 
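
    The arithmetic behind those headline figures is straightforward to check. The sketch below is a rough back-of-the-envelope calculation rather than the study’s own analysis; it simply multiplies the quoted fractions by the estimate of roughly 4.1 billion sun-like stars in the galaxy.

```python
# Back-of-the-envelope check of the figures quoted above (illustrative only).
SUN_LIKE_STARS = 4.1e9  # rough estimate of sun-like stars in the Milky Way

estimates = {
    "conservative": 0.07,    # -> roughly 300 million
    "average, low": 0.37,
    "one in two": 0.50,      # -> over 2 billion
    "average, high": 0.60,
    "optimistic": 0.88,      # -> over 3.6 billion
}

for label, fraction in estimates.items():
    stars = fraction * SUN_LIKE_STARS
    print(f"{label:>13}: about {stars / 1e9:.2f} billion stars with a potentially habitable planet")
```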

    “One of the original goals of the Kepler mission was to compute exactly this number,” says Bryson. “We have always intended to do this.” 

    Habitability has to do with the chances a planet has temperatures moderate enough for liquid water to exist on the surface (since water is essential for life as we know it). Most studies figure this out by gauging the distance of an exoplanet from its host star and whether its orbit is not too close and not too far—the so-called Goldilocks zone.

    According to Bryson, orbital distance is a useful metric when you’re examining one specific star. But when you’re looking at many stars, they’ll all exhibit different brightnesses that deliver different amounts of heat to surrounding objects, which means their habitable zones will vary. The team instead chose to think about habitability in terms of the amount of light hitting the surface of an exoplanet, which the paper calls the “instellation flux.” 

    Through stellar brightness data, “we are measuring the true temperature of the planet—whether or not it is truly in the habitable zone—for all the planets around all the stars in our sample,” says Bryson. You don’t get the same sort of reliable temperature figures working with distances, he says. 
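
    In essence this is the familiar inverse-square relationship: the flux a planet receives scales with its star’s luminosity and falls off with the square of its orbital distance. The snippet below is a simple illustration of that relation in Earth-relative units, using made-up example values rather than numbers from the study.

```python
# Flux a planet receives relative to Earth's insolation: S = L / d^2,
# with luminosity L in solar units and orbital distance d in astronomical units.
def relative_flux(luminosity_solar: float, distance_au: float) -> float:
    return luminosity_solar / distance_au ** 2

# Illustrative values only (not from the paper):
print(relative_flux(1.0, 1.0))   # Earth around the sun -> 1.0
print(relative_flux(0.6, 0.8))   # a dimmer star, closer orbit -> ~0.94, Earth-like flux
```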

    Though Bryson claims this study’s uncertainties are smaller than those in previous efforts, they are still quite large. This is mainly because the team is working with such a small sample of discovered rocky exoplanets. Kepler has identified over 2,800 exoplanets, only some of which orbit sun-like stars. It’s not an ideal number to use to predict the existence of hundreds of millions of others in the galaxy. “By having so few observations, it limits what you can say about what the truth is,” says Bryson.
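
    For a rough sense of why sample size matters so much: counting statistics alone put a floor of about 1/sqrt(N) on the fractional uncertainty of any rate estimated from N objects. The snippet below is a generic statistical illustration, not the study’s actual error analysis.

```python
# Generic illustration: fractional counting uncertainty shrinks as 1/sqrt(N),
# so estimates built from small samples carry wide error bars.
import math

for n in (30, 300, 2800):
    print(f"N = {n:>4}: about {100 / math.sqrt(n):.0f}% counting uncertainty")
```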

    Lastly, the new study assumes a simple model for these exoplanets that could depart dramatically from conditions in the real world (some of these stars may form binary star systems with other stars, for example). Plugging more variables into the model would help paint a more accurate picture, but that requires more precise data that we don’t really have yet. 

    But it’s studies like these that could help us acquire that data. The whole point of Kepler was to help scientists figure out what kinds of interstellar objects they ought to devote more resources to studying to find extraterrestrial life, especially with space-based telescopes whose observation time is limited. These are the instruments (such as NASA’s James Webb Space Telescope and the ESA’s PLATO telescope) that could determine whether a potentially habitable exoplanet has an atmosphere or is home to any potential biosignatures, and studies like this latest one can help engineers design telescopes more suited to these tasks. 

    “Almost every sun-like star in the galaxy has a planet where life could form,” says Frank. “Humanity has been asking this question for more than 2,500 years, and now we not only know the answer, we are refining our knowledge of that answer. This paper tells us there are a lot of planets out there in the right place for life to form.”



    from MIT Technology Review https://ift.tt/3nbTOnN
    via IFTTT


    Why social media can’t keep moderating content in the shadows

    0

    Back in 2016, I could count on one hand the kinds of interventions that technology companies were willing to use to rid their platforms of misinformation, hate speech, and harassment. Over the years, crude mechanisms like blocking content and banning accounts have morphed into a more complex set of tools, including quarantining topics, removing posts from search, barring recommendations, and down-ranking posts in priority. 

    And yet, even with more options at their disposal, misinformation remains a serious problem. There was a great deal of coverage about misinformation on Election Day—my colleague Emily Dreyfuss found, for example, that when Twitter tried to deal with content using the hashtag #BidenCrimeFamily, with tactics including “de-indexing” by blocking search results, users including Donald Trump adapted by using variants of the same tag. But we still don’t know much about how Twitter decided to do those things in the first place, or how it weighs and learns from the ways users react to moderation.
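
    To see why exact-match de-indexing is so easy to route around, consider a toy sketch (this is not Twitter's actual system; the blocklist and matching rules are invented for illustration): a literal blocklist misses trivially altered tags, and even simple normalization only catches the most obvious variants.

        # Toy sketch of hashtag blocklisting (not Twitter's real system).
        import re

        BLOCKLIST = {"#bidencrimefamily"}    # hypothetical blocked tag

        def exact_match_blocked(tag):
            return tag.lower() in BLOCKLIST

        def normalized_blocked(tag):
            # Drop separators and collapse repeated letters before comparing.
            cleaned = re.sub(r"[^a-z#]", "", tag.lower())
            cleaned = re.sub(r"(.)\1+", r"\1", cleaned)
            return cleaned in {re.sub(r"(.)\1+", r"\1", b) for b in BLOCKLIST}

        for variant in ["#BidenCrimeFamily", "#Biden_Crime_Family", "#BidenCrimeFamilly"]:
            print(variant, exact_match_blocked(variant), normalized_blocked(variant))
        # Exact matching catches only the canonical form; users simply switch to variants.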

    As social media companies suspended accounts and labeled and deleted posts, many researchers, civil society organizations, and journalists scrambled to understand their decisions. The lack of transparency about those decisions and processes means that—for many—the election results end up with an asterisk this year, just as they did in 2016.

    What actions did these companies take? How do their moderation teams work? What is the process for making decisions? Over the last few years, platform companies put together large task forces dedicated to removing election misinformation and labeling early declarations of victory. Sarah Roberts, a professor at UCLA, has written about the invisible labor of platform content moderators as a shadow industry, a labyrinth of contractors and complex rules which the public knows little about. Why don’t we know more? 

    In the post-election fog, social media has become the terrain for a low-grade war on our cognitive security, with misinformation campaigns and conspiracy theories proliferating. When the broadcast news business served the role of information gatekeeper, it was saddled with public interest obligations such as sharing timely, local, and relevant information. Social media companies have inherited a similar position in society, but they have not taken on those same responsibilities. This situation has loaded the cannons for claims of bias and censorship in how they moderated election-related content.  

    Bearing the costs

    In October, I joined a panel of experts on misinformation, conspiracy, and infodemics for the House Permanent Select Committee on Intelligence. I was flanked by Cindy Otis, an ex-CIA analyst; Nina Jankowicz, a disinformation fellow at the Wilson Center; and Melanie Smith, head of analysis at Graphika. 

    As I prepared my testimony, Facebook was struggling to cope with QAnon, a militarized social movement being monitored by their dangerous-organizations department and condemned by the House in a bipartisan bill. My team has been investigating QAnon for years. This conspiracy theory has become a favored topic among misinformation researchers because of all the ways it has remained extensible, adaptable, and resilient in the face of platform companies’ efforts to quarantine and remove it. 

    QAnon has also become an issue for Congress, because it’s no longer about people participating in a strange online game: it has touched down like a tornado in the lives of politicians, who are now the targets of harassment campaigns that cross over from the fever dreams of conspiracists to violence. Moreover, it’s happened quickly and in new ways. Conspiracy theories usually take years to spread through society, with the promotion of key political, media, and religious figures. Social media has sped this process through ever-growing forms of content delivery. QAnon followers don’t just comment on breaking news; they bend it to their bidding.

    I focused my testimony on the many unnamed harms caused by the inability of social media companies to prevent misinformation from saturating their services. Journalists, public health and medical professionals, civil society leaders, and city administrators, as well as law enforcement and election officials, are bearing the cost of misinformation-at-scale and the burden of addressing its effects. Many people tiptoe around political issues when chatting with friends and family, but as misinformation about protests began to mobilize white vigilantes and medical misinformation led people to downplay the pandemic, different professional sectors took on important new roles as advocates for truth.

    Take public health and medical professionals, who have had to develop resources for mitigating medical misinformation about covid-19. Doctors are attempting to become online influencers in order to correct bogus advice and false claims of miracle cures—taking time away from delivering care or developing treatments. Many newsrooms, meanwhile, adapted to the normalization of misinformation on social media by developing a “misinformation beat”—debunking conspiracy theories or fake news claims that might affect their readers. But those resources would be much better spent on sustaining journalism rather than essentially acting as third-party content moderators. 

    Civil society organizations, too, have been forced to spend resources on monitoring misinformation and protecting their base from targeted campaigns. Racialized disinformation is a seasoned tactic of domestic and foreign influence operations: campaigns either impersonate communities of color or use racism to boost polarization on wedge issues. Brandi Collins-Dexter testified about these issues at a congressional hearing in June, highlighting how tech companies hide behind calls to protect free speech at all costs without doing enough to protect Black communities targeted daily on social media with medical misinformation, hate speech, incitement, and harassment. 

    Election officials, law enforcement personnel, and first responders are at a serious disadvantage attempting to do their jobs while rumors and conspiracy theories spread online. Right now, law enforcement is preparing for violence at polling places. 

    A pathway to improve

    When misinformation spreads from the digital to the physical world, it can redirect public resources and threaten people’s safety. This is why social media companies must take the issue as seriously as they take their desire to profit. 

    But they need a pathway to improve. Section 230 of the Communications Decency Act empowers social media companies to improve content moderation, but politicians have threatened to remove these protections so they can continue with their own propaganda campaigns. All throughout the October hearing, the specter loomed of a new agency that could independently audit civil rights violations, examine issues of data privacy, and assess the market externalities of this industry on other sectors. 

    As I argued during the hearing, the enormous reach of social media across the globe means it is important that regulation not begin with dismantling Section 230 until a new policy is in place. 

    Until then, we need more transparency. Misinformation is not solely about the facts; it’s about who gets to say what the facts are. Fair content moderation decisions are key to public accountability. 

    Rather than hold on to technostalgia for a time when it wasn’t this bad, sometimes it is worth asking what it would take to uninvent social media, so that we can chart a course for the web we want—a web that promotes democracy, knowledge, care, and equity. Otherwise, every unexplained decision by tech companies about access to information potentially becomes fodder for conspiracists and, even worse, the foundation for overreaching governmental policy.



    from MIT Technology Review https://ift.tt/2I77eCf
    via IFTTT

    It might not feel like it, but the election is working

    0

    The election process is working. 

    The long-building “chaos” narrative being pushed by President Donald Trump suggests that the election is fatally flawed, fraud is rampant, and no institutions other than Trump himself can be trusted. There is no evidence for any of that, and as the election math increasingly turns against him, the actual election systems around America continue functioning well.

    Nothing about the 2020 elections is normal, of course, because nothing about 2020 is normal. The fact that the vote count is slower than usual is unavoidably stressful—but it’s also exactly what officials and experts have said for months would happen as every vote is counted. 

    “I think how the election process has played out has been remarkable,” says David Levine, the elections integrity fellow at the Alliance for Securing Democracy. “I think the entire country owes a tremendous gratitude to state and local election officials and those that have worked closely with them against the backdrop of foreign interference, coronavirus pandemic, civil unrest, and frankly inadequate support from the federal government. We have an election that has gone reasonably well.” 

    By any measure, the 2020 election scores better than any in recent history on security, integrity, and turnout. Election infrastructure is more secure: the Department of Homeland Security installed Albert sensors in election systems, which warn officials of intrusion by hackers, and the National Security Agency has been aggressively hunting hacking groups and handing intelligence to officials around the country. Election officials have invested in paper backup systems so they can more easily recover from technical problems.

    There are still weak points, especially with the electronic poll books used to sign voters in and with verifying results when a candidate demands a recount. But more states now have paper records as a backup to electronic voting, and more audits will take place this year than in any previous American election.

    The pandemic itself is one reason for these improvements. The increase in mail-in and early voting meant that ballots were cast over a month-long period. That helps security because activity isn’t all focused on a single day, said a CISA official in a press briefing. It gives election officials more time to deal with both normal mistakes and malicious attacks, and any problems that do arise affect fewer voters. And more Americans will want to vote this way in the future, said Benjamin Hovland, the top federal elections official and a Trump appointee.

    That means the pandemic that many feared would wreck the election has paradoxically made the system stronger. “All of that uncertainty resulted in tremendous scrutiny and transparency, and most importantly, public education about all of these administrative processes,” says Eddie Perez, an elections expert at the Open Source Election Technology Institute. 

    The calls from the president and his allies to stop vote counts can still undermine confidence in the outcome. But so far, few of Trump’s arguments have carried any weight in court. Judges denied or threw out lawsuits in Georgia and Michigan on Thursday. Even calls for recounts look unconvincing right now. Historically, recounts matter when races are within just a few hundred votes in a single state, as in the 2000 election. Right now, all of the half-dozen contested states have margins much bigger than that. 

    And while the president’s family and allies have been attacking fellow Republicans for not sufficiently supporting his efforts, several prominent party members have publicly rebuked him for his impatience, including Mitch McConnell, the Senate majority leader. “All things considered, I think that the media and the public are doing a better than average job at remaining patient and resisting inflammatory rhetoric,” says Perez.

    “This election is going remarkably well considering the obstacles election officials have faced all year long,” says Mark Lindeman, co-director of the election integrity organization Verified Voting. “Election officials in many states have had to field two entirely new election systems: massive-scale mail ballots where they have handled only a handful in the past, and also reengineering in-person voting to accommodate social distancing. There’s a chaos narrative, but what I see is not chaos. What I see is people working very hard to finish a difficult job.”

    On Thursday evening, Trump gave a rambling news conference in which he repeated his many unsubstantiated claims about fraud. Most of the news networks cut away after a minute or two. Even Fox News’s anchors said afterwards that they “hadn’t seen the evidence” for Trump’s claims. The president seemed, they said, to be readying for Biden to be declared the winner—but then to start mounting legal challenges. The counting may be over soon, but the election is far from finished.

    This is an excerpt from The Outcome, our daily email on election integrity and security. Click here to get regular updates straight to your inbox.



    from MIT Technology Review https://ift.tt/3lgrxff
    via IFTTT

    Why counting votes in Pennsylvania is taking so long

    0

    So Election Day is over, but the election continues.

    The world’s attention has turned to a set of swing states still counting important mail-in votes, particularly Pennsylvania. So what exactly is happening today? How are counts happening? Is the election fair and secure?

    “I urge everyone to remain patient,” Pennsylvania Secretary of State Kathy Boockvar said in a press conference today. “We are going to accurately count every single ballot.” 

    “The vote count, as I’ve said many times, is never done on the day of election night. The counties are doing this accurately as quickly as they possibly can.”

    Across the state, mail-in ballots postmarked on or before Election Day are still arriving—don’t forget there have been significant postal delays—and so counting continues. The Republican state legislature declined to change Pennsylvania law, which meant that processing of over 2.5 million mail-in votes could only begin on Tuesday morning, while other states started the process much earlier. So the processing starts later, the counting starts later, and the work is greater for mail-in ballots.

    “The practical labor associated with mail-in ballots has more steps than in-person voting,” said Eddie Perez, a Texas-based election administration expert with the nonpartisan OSET Institute. But, he added, “Both in human and technology features, there’s a lot of safeguards for mail-in ballots.”

    Here’s a concise but thorough rundown of the counting, security, and integrity process right now in Pennsylvania (a simplified code sketch of these checks follows the list):

    • Ballots and envelopes were sent out only to registered and verified voters who requested them.
    • Election officials must receive the ballot and envelope within three days of Election Day—although this deadline may be challenged by Republicans.
    • Officials verify that each ballot is associated with the exact, eligible voter on the rolls.
    • Ballots are validated with voter records in exactly the same way as in-person votes.
    • To prevent fraud, each ballot and envelope has computer-readable codes and exact physical features like style, size, weight, and design that allow the computers to identify which specific elections, precincts, content, and additional validation information the vote applies to.
    • Signatures on the ballot envelopes are matched against a central database by bipartisan teams.
    • Envelopes are opened and paperwork removed in a specific and legally-mandated procedure.
    • Ballots that fail to pass these security measures are sent for further investigation, or for follow-up with the voter.
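
    Below is a heavily simplified sketch of those checks as code, with made-up field names, identifiers, and rules purely for illustration; it is not Pennsylvania's actual software.

        # Hypothetical, simplified model of the mail-ballot checks above (Python).
        from dataclasses import dataclass

        @dataclass
        class MailBallot:
            voter_id: str
            envelope_code: str        # machine-readable code tying the envelope to an election
            signature_matches: bool   # outcome of the bipartisan signature review
            received_in_window: bool  # arrived within the legal receipt deadline

        REGISTERED_VOTERS = {"PA-000123", "PA-000456"}   # made-up voter roll

        def validate(ballot):
            if not ballot.received_in_window:
                return "reject: received after the deadline"
            if ballot.voter_id not in REGISTERED_VOTERS:
                return "investigate: no matching registered voter"
            if not ballot.envelope_code.startswith("PA-2020-GEN"):
                return "investigate: envelope not issued for this election"
            if not ballot.signature_matches:
                return "follow up with voter: signature mismatch"
            return "accept: forward for counting"

        print(validate(MailBallot("PA-000123", "PA-2020-GEN-PHL-01", True, True)))
        print(validate(MailBallot("PA-999999", "PA-2020-GEN-PHL-01", True, True)))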

    Decades of history, independent study, and these extra security steps explain why mail-in ballots are not easily susceptible to fraud, and why attempts to paint them as prone to fraud are baseless disinformation, a false narrative propagated first and foremost by the president of the United States. In decades of increasing mail-in voting around the United States, widespread fraud is nonexistent.

    The Trump campaign, having now lost in the key swing state of Wisconsin, has said it will sue in Michigan and Pennsylvania to stop the ongoing counting of ballots, while falsely claiming victory despite many votes still remaining uncounted. Votes counted earlier in the process favor Trump, while the mail-in votes from Democratic areas that are still being counted are expected to favor Biden. 

    The counting in Pennsylvania could carry on through Friday.

    There is one more scenario to address. Pennsylvania automatically recounts votes if the result is within 0.5%. A loser can request and pay for a recount by going to court and alleging errors in the vote count.
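
    The automatic-recount rule is easy to state as arithmetic. The vote totals below are invented; the 0.5% threshold is the one described above, applied here to a simplified two-candidate total.

        # Illustrative check of the automatic-recount threshold (Python).
        def automatic_recount(votes_a, votes_b, threshold=0.005):
            """True if the margin is within 0.5% of the (two-candidate) total."""
            total = votes_a + votes_b
            return abs(votes_a - votes_b) / total <= threshold

        print(automatic_recount(3_400_000, 3_390_000))   # ~0.15% margin -> True
        print(automatic_recount(3_450_000, 3_350_000))   # ~1.5% margin  -> False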

    So far there is no reason to believe any such errors have occurred but, as has been said so many times, there is still a long way to go in Pennsylvania—and that means there may still be a long way to go for everyone.

    This is an excerpt from The Outcome, our daily email on election integrity and security. Click here to get regular updates straight to your inbox.



    from MIT Technology Review https://ift.tt/3k5ve64
    via IFTTT

    Here are the main tech ballot initiatives that passed last night

    0

    While the presidential election is still in the balance, several ballot initiatives with broad implications for how we use technology have passed.

    Ballot initiatives pose questions to voters and can—if passed—create, amend, or repeal existing state law. In total, there were 129 statewide ballot initiatives across the country in this presidential election, including many around taxation and drug legalization.

    Here’s a round-up of some of the initiatives on tech policy with broader national implications, and what they might mean for consumers, privacy, and corporations. We’ll update it as more are confirmed to have passed over the next few days.

    California: gig workers will not become employees. 

    Proposition 22 was easily approved by California voters, meaning that gig workers for apps like Lyft, Uber, and DoorDash will not become employees of those companies. Instead they will remain independent contractors. This essentially overturns AB-5, passed last year, which would have given gig workers the same protections as other workers, such as minimum wage, benefits, and compensation. The proposition also includes a provision requiring a seven-eighths supermajority of California’s legislature to amend it, making any changes very difficult. As Mary-Beth Moylan, a law professor at McGeorge School of Law in Sacramento, recently noted, it is more common for such provisions to require a ¾ majority than ⅞. 

    A consortium of tech companies, including Uber, Lyft, and Postmates, spent more than $200 million in support of it—the most spent on any California proposition. Their huge financial advantage was amplified by their access to in-app marketing, including messaging that suggested that “Yes on 22” would protect workers. In contrast, the opposition, led by labor unions, raised just short of $20 million. 

    Given the outspend, the results were somewhat expected—and both the fundraising and marketing may provide a playbook for future fights between tech companies and consumers. 

    Also California: expanded privacy protections for consumers 

    The “Consumer Personal Information Law and Agency Initiative,” a.k.a. Proposition 24, also passed, expanding the state’s privacy protections for consumers. The proposition calls for the creation of a new enforcement agency for the state’s privacy laws, expands the types of information that consumers can opt out of sharing with advertisers, and shifts the existing “do not sell” provision to “do not sell and share.” 

    The measure was actually a bit contentious among privacy rights groups, as we explained in advance of the vote:

    “Consumers would still have to opt into the protections, rather than opt out, and companies would be allowed to charge more for goods and services to make up for revenue they lose by not getting to sell data. This could make it harder for low-income and other marginalized groups to exercise their privacy rights.” 

    Massachusetts: voters approve a “right to repair” law for vehicles

    Massachusetts voters overwhelmingly said yes to Question 1, “Amend the Right to Repair Law,” which will give car owners and independent mechanics greater access to wireless vehicle data. A similar law passed in Massachusetts in 2013 required diagnostic data to be shared with independent mechanics, but it did not cover wireless data, which has become far more common in the seven years since. The new measure fills in that gap. It is a blow to the auto manufacturers that lobbied for a no vote; they argued the change would not give them enough time to protect cars’ security systems against hacking. 

    The law will apply to cars from the 2022 model year onward, and it likely won’t affect only Massachusetts. Auto firms, like other consumer product companies, tend to match the highest regulatory standards set by states, so consumers across the country also stand to benefit.

    Michigan: Protect electronic data from unreasonable search

    Michigan’s Proposal 2, which requires a search warrant for access to electronic data and communications, is on track to pass by a wide margin. A number of states have already passed similar legislation protecting electronic data, including Missouri and New Hampshire. 

    This story will be updated when other ballot measures are confirmed to have passed.



    from MIT Technology Review https://ift.tt/3jZ0ROy
    via IFTTT

    We just found a source for one of the most mysterious phenomena in astronomy

    0

    Fast radio bursts are among the strangest mysteries in space science. These pulses last less than five milliseconds but release more energy than the sun does in days or weeks. Since they were first recorded in 2001 (and written about in 2007), scientists have discovered dozens of FRBs. Most are one-off signals, but a few repeat, including one that beats at a regular tempo.

    But no one has ever been able to explain what exactly produces FRBs. Before now, only five had been localized to specific regions in space, and they all originated outside our galaxy. When a signal comes from so far away, it’s very hard to find the object responsible for producing it. Most theories have focused on cosmic collisions or neutron stars. And also, well, aliens.

    Spoiler alert: it’s not aliens. Two new studies published in Nature today strongly suggest that magnetars—highly magnetized neutron stars—are one source of FRBs. The studies also indicate that these bursts are probably much more common than we imagined. 

    “I don’t think we can conclude that all fast radio bursts come from magnetars, but for sure models that suggest magnetars as an origin for fast radio bursts are very probable,” says Daniele Michilli, an astrophysicist from McGill University and a coauthor of the first Nature study.

    The new findings focus on an FRB detected on April 28 by two telescopes: CHIME (the Canadian Hydrogen Intensity Mapping Experiment, based in British Columbia) and STARE2 (an array of three small radio antennas located throughout California and Utah). The signal, dubbed FRB 200428, released more energy in radio waves in one millisecond than the sun does in 30 seconds. 
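
    For a sense of scale, that comparison implies an enormous radio luminosity. The arithmetic below uses the sun's standard total luminosity (about 3.8e26 watts) as the yardstick; everything else simply follows from the comparison quoted above.

        # Order-of-magnitude check of the energy comparison above (Python).
        SOLAR_LUMINOSITY_W = 3.8e26                 # watts (standard value)
        sun_energy_30s = SOLAR_LUMINOSITY_W * 30    # joules the sun radiates in 30 seconds
        burst_duration_s = 1e-3                     # one millisecond

        implied_radio_luminosity = sun_energy_30s / burst_duration_s
        print(f"Sun's output over 30 s: {sun_energy_30s:.1e} J")
        print(f"Implied burst luminosity: {implied_radio_luminosity:.1e} W "
              f"(~{implied_radio_luminosity / SOLAR_LUMINOSITY_W:.0f}x the sun)")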

    It’s par for the course for CHIME to find FRBs—it’s found dozens, and in the future the telescope might be able to detect a burst every day. But even though STARE2 was specifically designed to look for FRBs within the galaxy, at lower sensitivities than most other instruments, few expected it to succeed. When it became operational last year, the team predicted a 10% chance it would actually find a signal in the Milky Way. 

    Then—it happened. “When I first looked at the data for the first time, I froze,” says Christopher Bochenek, a Caltech graduate student in astronomy, who leads the STARE2 project and is the lead author of the second Nature study. “It took me a few minutes to collect myself and make a call to a friend to actually sit down and make sure this thing was actually real.” Between STARE2 and CHIME, this burst was seen by five radio telescopes across North America. 

    Those observations just happened to coincide with an incredibly bright flash emanating from a highly magnetized neutron star—a magnetar—called SGR J1935+2154, located about 30,000 light-years from Earth near the center of the Milky Way galaxy. 

    This magnetar, thought to be the collapsed core of a star roughly 40 to 50 times more massive than the sun, produces intense bouts of electromagnetic radiation, including x-rays and gamma rays. Its magnetic fields are so strong that they squish nearby atoms into pencil-like shapes. 

    Magnetars have always been a suspected source of FRBs, but it’s been difficult for astrophysicists to confirm this, since all other signals came from outside of the Milky Way. 

    Researchers compared the radio waves of FRB 200428 with x-ray observations made by six space telescopes, as well as other ground-based observatories. Those x-ray emissions pointed to SGR J1935+2154, which flashed 3,000 times brighter than any other magnetar on record. 

    The CHIME and STARE2 teams deduced that this particular magnetar was responsible for the energetic event that produced not only the bright x-ray emissions but FRB 200428 as well. It’s the first time such a burst has ever been discovered inside the Milky Way, and this FRB emits more energy than any other source of radio waves detected in the galaxy. 

    FRB 200428 is only one-thirtieth as strong as the weakest extragalactic FRB on record, and one-thousandth the strength of the average signal. So the fact that STARE2 recorded it after just about a year in operation is a strong indication that these signals are bouncing around the galaxy more frequently than scientists realized. 

    A counterpoint to these new findings comes from FAST, the Five-hundred-meter Aperture Spherical Telescope, located in southwest China. FAST is the largest single-dish radio telescope in the world. It can’t survey large swaths of the sky, but it can peer narrowly to look for faint signals in places very far away.

    FAST studied SGR J1935+2154 for a total of eight hours across four observational sessions from April 16 to 29, according to a third Nature study. And it found no radio waves that coincided with any known x-ray or gamma-ray bursts that happened during that time. 

    That report doesn’t necessarily nix the magnetar explanation, especially since FAST wasn’t observing during the moment that FRB 200428 was detected. But it does suggest that a magnetar emitting an FRB, if confirmed, is a very rare event, and one that produces radio signals we have yet to fully characterize.

    Sandro Mereghetti, an astronomer with the National Institute of Astrophysics in Milan, helped lead the SGR J1935+2154 x-ray detections made by the European Space Agency’s INTEGRAL telescope (International Gamma-Ray Astrophysics Laboratory). Though he believes the discovery “strongly favors the class of FRB models based on magnetars,” he points out that “the particular physical processes leading to the observed bursts of radio and hard x-ray emission are not settled yet.” In other words, we don’t know what exactly happens inside a magnetar that would produce FRBs along with associated x rays or gamma rays. 

    “I would not say that the mystery of FRBs has been solved,” says Mereghetti. “But this is certainly a big step forward that also opens prospects for other similar detections.”



    from MIT Technology Review https://ift.tt/2I3FPB6
    via IFTTT
