Daily Tech Digest - January 14, 2017

Are these the gadgets most likely to change our lives in 2017?

The self-driving car has made significant progress in the last couple of years. That’s not surprising, given its potential to ease congestion, prevent accidents and reduce carbon emissions. The pioneers come from the new economy: Google, Uber and Tesla all have partially or fully autonomous vehicles. However, most of the major car makers plan to introduce autonomous vehicles by the early 2020s, and were showing off both concepts and future self-driving models at CES. Collaborations with leading technology companies (chip makers NVIDIA and Intel, Samsung and Apple, for instance) were high on the agenda, as was the need for reassurance on the safety of driverless cars. In this context, providing the computing ‘horsepower’ and depth of information a car needs to assess its environment and make decisions was a major focus for exhibitors in Las Vegas.


WhatsApp’s Small Security Flaw Is the Price of Convenience

According to a new report by the Guardian, WhatsApp has a flaw that could, in theory, allow the company to read messages that users assume are safe from prying eyes. Tobias Boelter, a security researcher at the University of California, Berkeley, tells the newspaper that WhatsApp can force a device to generate a new encryption key when a user is offline. Then, if someone sends messages to that device while it is offline, the sender’s app will automatically re-encrypt them with the new key and resend them. Those messages could, says Boelter, be read by WhatsApp. And, presumably, by anyone who demanded the company turn them over, too. WhatsApp knows this is the case, and it is unapologetic about it. It has a compelling argument: convenience.
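To make the trade-off concrete, here is a minimal Python sketch of the behaviour the report describes. It is only an illustration: the XOR ‘cipher’, the Sender class and the block flag are invented stand-ins, not the real Signal-protocol implementation.

```python
import os

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR "one-time pad" stand-in for the real double-ratchet encryption.
    return bytes(p ^ k for p, k in zip(plaintext, key))

class Sender:
    def __init__(self, recipient_key: bytes):
        self.recipient_key = recipient_key
        self.pending = []  # plaintexts not yet acknowledged as delivered

    def send(self, plaintext: bytes) -> bytes:
        self.pending.append(plaintext)
        return toy_encrypt(self.recipient_key, plaintext)

    def on_key_change(self, new_key: bytes, block: bool = False):
        # Non-blocking (the behaviour the report attributes to WhatsApp):
        # transparently re-encrypt undelivered messages under the
        # server-supplied key and resend them. block=True models the stricter
        # choice of refusing to resend until the user verifies the new key.
        self.recipient_key = new_key
        if block:
            raise RuntimeError("recipient key changed; verify before resending")
        return [toy_encrypt(new_key, p) for p in self.pending]

alice = Sender(recipient_key=os.urandom(32))
alice.send(b"meet at noon")                   # recipient is offline, delivery pending
resent = alice.on_key_change(os.urandom(32))  # server forces a new key
print(len(resent), "message(s) silently re-encrypted and resent")
```

As the Guardian report notes, Signal’s own client behaves like the blocking path, warning the sender about the key change rather than resending automatically, while WhatsApp resends by default so messages aren’t lost when people swap SIM cards or phones.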


An Interview with Dr. David Bray and Michael Krigsman on Ethics and AI

The ethical aspects of AI center on development, use, and application. AI offers its maker advanced capabilities that can be applied to fields as diverse as robotics, medicine, autonomous vehicles, weapons, and much more. As with any technology, the developer’s goals and objectives dictate how AI technology is used and in what fields it is applied. Given the power of AI to mimic human decisions and intelligence, the question of application is crucial to consider. For example, imagine AI technologies in the hands of a government planning to identify and target specific populations or groups for attack or discrimination. Most people would say this is an unethical use of AI. What about companies using AI to target consumers with levels of personalization unattainable today? At what point do we cross the line between appropriate and inappropriate use?


Your selfies might be leaving you vulnerable to hackers

According to research from a team at Japan’s National Institute of Informatics (NII), cyber thieves can lift your fingerprints from a photo in order to access your biometrically protected data (like the info secured on your iPhone by the Touch ID system). But while it's technically possible, biometrics experts say there's no need to panic. The NII team's report focuses on the personal security threats posed to social media users who share lots of publicly accessible pictures. Using a set of photos taken by a camera placed about three meters away from a subject, the team was able to recreate the fingerprints accurately. The Japan Times reports that NII researcher Isao Echizen told Sankei Shimbun, a Japanese-language newspaper, that peace signs could be exploited without much effort. “Just by casually making a peace sign in front of a camera, fingerprints can become widely available,” he told the paper.


Twitter CMO finally explains the purpose of Twitter

As Berland and her colleagues set out to clarify just what Twitter is and why it exists, they landed on the most obvious definition of all. "Twitter is the place to see what's happening," she said. "We've been asking the same question from you for years and years. We've been searching and searching, and the answer was staring in front of us all along." That central question — "what's happening?" — appears right in Twitter's main compose field. "The first thing we did is we actually took ourselves out of the social networking category in the app stores and we put ourselves where we belong, which is news," Berland said. "As we were telling the story about us being in the center of what's happening in the world, reflecting on what's happening in the world, there was in fact a lot happening in the world right here on Twitter," she said.


You should read this super-interesting AMA with AI researcher Joanna Bryson

There are two things that humans do that are opposites: anthropomorphizing and dehumanizing. I’m very worried about the fact that we can treat people like they are not people, but cute robots like they are people…We are used to applying ethics to stuff that we identify with, but people are getting WAY good at exploiting this and making us identify with things we don’t really have anything in common with at all. Even if we assumed we had a robot that was otherwise exactly like a human (I doubt we could build this, but let’s pretend like Asimov did), since we built it, we could make sure that its “mind” was backed up constantly by wifi, so it wouldn’t be a unique copy. We could ensure it didn’t suffer when it was put down socially. We have complete authorship. So my line isn’t “torture robots!” My line is “we are obliged to build robots we are not obliged to.”


“OK Facebook”—Why stop at assistants? Facebook has grander ambitions for modern AI

On the road to this human-like intelligence, Facebook will use machine learning (ML), a branch of artificial intelligence (AI), to understand all the content users feed into the company’s infrastructure. Facebook wants to use AI to teach its platform to understand the meaning of posts, stories, comments, images, and videos. Then with ML, Facebook stores that information as metadata to improve ad targeting and increase the relevance of user newsfeed content. The metadata also acts as raw material for creating an advanced conversational agent. These efforts are not some far-off goal: AI is the next platform for Facebook right now. The company is quietly approaching this initiative with the same urgency as its previous Web-to-mobile pivot. 
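As a purely illustrative sketch of that pipeline, the toy Python below derives ‘metadata’ from posts with a trivial keyword tagger and then uses it to rank a feed against a user’s interests; every name in it is invented, and the real systems of course use learned models rather than keyword lists.

```python
from dataclasses import dataclass, field

# Toy stand-in for an ML model that assigns topics to content.
TOPIC_KEYWORDS = {"travel": {"flight", "beach", "trip"},
                  "food": {"recipe", "dinner", "restaurant"}}

@dataclass
class Post:
    author: str
    text: str
    metadata: dict = field(default_factory=dict)

def understand(post: Post) -> Post:
    # "Understanding" step: derive metadata from the raw content.
    words = set(post.text.lower().split())
    post.metadata["topics"] = [t for t, kws in TOPIC_KEYWORDS.items() if words & kws]
    return post

def rank_for_user(posts, user_interests):
    # Newsfeed relevance: score posts by overlap between derived topics and interests.
    return sorted(posts, key=lambda p: -len(set(p.metadata["topics"]) & user_interests))

feed = [understand(Post("ana", "Booked a flight for the beach trip")),
        understand(Post("bo", "Trying a new dinner recipe tonight"))]
for post in rank_for_user(feed, user_interests={"food"}):
    print(post.author, post.metadata["topics"])
```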


One Startup’s Vision to Reinvent the Web for Better Privacy

Blockstack’s vision is made possible by an identity system built to be independent of any one company, including the startup itself. It uses the digital ledger, or blockchain, underpinning the digital currency Bitcoin to track usernames and associated encryption keys that allow a person to control his or her data and identity. A collective of thousands of computers around the globe maintains the blockchain, and no one entity controls it. Blockstack’s system uses the blockchain to record domain names, too, meaning there’s no need for an equivalent to ICANN, the body that oversees Web domains today. Software built on top of the name and ID systems gives people control over the data they let online services use. Microsoft is already collaborating with Blockstack to explore uses for its platform.
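A rough sketch of the core idea, with no claim to match Blockstack’s actual implementation: an append-only, hash-chained log maps a human-readable name to the owner’s public key and a hash of their data, so any node that replays the same log resolves names identically, with no central registry. All class and field names below are illustrative.

```python
import hashlib, json, time

class NameLedger:
    """Append-only log of name operations; every node replays it to the same state."""
    def __init__(self):
        self.chain = []   # list of (block_hash, operation) entries
        self.state = {}   # name -> {"owner_pubkey": ..., "zonefile_hash": ...}

    def _append(self, op: dict) -> str:
        # Chain each entry to the previous one so the history is tamper-evident.
        prev = self.chain[-1][0] if self.chain else "0" * 64
        block_hash = hashlib.sha256(
            (prev + json.dumps(op, sort_keys=True)).encode()).hexdigest()
        self.chain.append((block_hash, op))
        return block_hash

    def register(self, name: str, owner_pubkey: str, zonefile_hash: str) -> str:
        if name in self.state:
            raise ValueError(f"{name} already registered")
        self.state[name] = {"owner_pubkey": owner_pubkey,
                            "zonefile_hash": zonefile_hash}
        return self._append({"op": "register", "name": name,
                             "owner_pubkey": owner_pubkey,
                             "zonefile_hash": zonefile_hash,
                             "ts": time.time()})

    def resolve(self, name: str) -> dict:
        return self.state[name]

ledger = NameLedger()
ledger.register("alice.id", owner_pubkey="03ab...", zonefile_hash="b7e2...")
print(ledger.resolve("alice.id"))
```

In the real system the operations are recorded in the Bitcoin blockchain, as the excerpt describes, and changes to a name must be signed with the registered key; the sketch skips both to stay short.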


Developing Transactional Microservices Using Aggregates, Event Sourcing and CQRS

On the surface, using events to maintain consistency between aggregates seems quite straightforward. When a service creates or updates an aggregate in the database, it simply publishes an event. But there is a problem: updating the database and publishing an event must be done atomically. Otherwise, if a service crashed after updating the database but before publishing an event, the system would remain in an inconsistent state. The traditional solution is a distributed transaction involving the database and the message broker. But, for the reasons described earlier in part 1, 2PC (two-phase commit) is not a viable option. ... A message consumer that subscribes to the message broker eventually updates the database. This approach guarantees that the database is updated and the event is published. The drawback is that it implements a much more complex consistency model.
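The following self-contained Python sketch (invented names, one process, no real broker or database) illustrates the shape of that approach: appending to the event log is the single atomic state change, and a subscribed consumer projects each event into a separate read model, which therefore becomes consistent only eventually.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class OrderCreated:
    order_id: str
    total: float

class EventStore:
    """The system of record: appending an event IS the state change, so there is
    no separate 'update database, then publish' step that can half-fail."""
    def __init__(self):
        self._log: List[object] = []
        self._subscribers: List[Callable[[object], None]] = []

    def subscribe(self, handler: Callable[[object], None]) -> None:
        self._subscribers.append(handler)

    def append(self, event: object) -> None:
        self._log.append(event)            # single atomic write
        for handler in self._subscribers:  # delivery can be asynchronous/retried
            handler(event)

# Read model: the "database" the message consumer eventually updates.
orders_view = {}

def project(event: object) -> None:
    if isinstance(event, OrderCreated):
        orders_view[event.order_id] = {"total": event.total, "status": "NEW"}

store = EventStore()
store.subscribe(project)                                   # the message consumer
store.append(OrderCreated(order_id="o-1", total=42.0))
print(orders_view)                                         # eventually consistent read side
```

In a production setup the projection would run as a separate service, consume from a durable broker, and track its position in the log so it can replay after a crash, which is where the extra consistency complexity mentioned above comes in.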


Is this the year IoT standards will finally make sense?

There’s too much at stake in a potentially huge market for major companies to give up the chance to dominate home IoT, Greengart said. “I’m highly skeptical that 'co-opetition' in this regard will prevail over competition. And given that nobody knows what layer of the stack is going to be the most valuable one, everyone is fighting for their own,” he said. The common thread that will make smart homes work may turn out to be a system from one vendor, like Apple’s HomeKit, Greengart said. Apple is as well-positioned as any company to make that happen. But even though many manufacturers at last week’s CES show introduced products that use HomeKit, they didn’t play up that capability much, he said. Alexa, Amazon’s cloud-based AI platform that made a splash at CES, at least provides a single user interface, though Greengart said it’s not really a full IoT platform like HomeKit, at least not yet.



Quote for the day:


"It is what we make out of what we have, not what we are given, that separates one person from another." -- Nelson Mandela

