
Tech Bytes - Daily Digest: October 29, 2016

Former NSA leader talks Snowden & the future of infosec, How big data can improve student performance and learning approaches, What is data quality & how do you measure it for best results, How economists view the rise of Artificial Intelligence, Companies complacent about data breach preparedness, How much does a data breach cost, and more.

19 psychological tricks that will help you ace a job interview

If the hiring manager offers you some flexibility in choosing an interview time, ask if you could come in around 10:30 a.m. on a Tuesday. That's likely when your interviewer is relatively relaxed. In general, you should avoid early-morning meetings because your interviewer may still be preoccupied with everything they need to get done that day. You'll also want to avoid being the last meeting of the workday, as your interviewer may already be thinking about what they need to accomplish at home. ... Twenty-three percent of interviewers recommended wearing blue, which suggests that the candidate is a team player, while 15% recommended black, which suggests leadership potential. Meanwhile, 25% said orange is the worst color to wear, as it suggests that the candidate is unprofessional.


Former NSA leader talks Snowden and the future of infosec

When it was put to Inglis that Snowden might be viewed as a whistle-blower acting with the intent to take a stand on the right of citizens to data privacy, Inglis said: “I don’t think he thought that. Whistle-blowers should be formally supported and within the US system they are. You have the right and authority to take [your concerns] to some other places … Snowden did none of that – he made no complaints to anyone ... [He] recklessly released information that had nothing to do with the protection of privacy.” Snowden helped to “fill the vacuum of information [about how the NSA works],” he said, and “a lot of the cost was a vilification of the NSA”.


Uber’s New Goal: Flying Cars in Less Than a Decade

In fact, Uber reckons that the technology for these kinds of vehicles will mature within five years. Google cofounder Larry Page seems to agree: earlier this year he invested in two flying-car companies. But there are still some significant wrinkles that need to be ironed out before that happens, which make the five-year time frame seem overly optimistic. To be fair, Uber realizes there are hurdles. In its white paper, Uber lists a number of issues it’s worried about (deep breath): battery technology, vehicle efficiency, vehicle performance and reliability, cost and affordability, safety, aircraft noise, emissions, takeoff and landing infrastructure, pilot training, air traffic control, and the certification process.


How Big Data Can Improve Student Performance And Learning Approaches

The data-based approach periodically tracks an individual student's performance using indicators such as prior knowledge, level of academic ability, and individual interests. This approach allows for personalized learning, where students can actively learn at their own pace. Furthermore, educators can direct their support, tools, and assistance to the students who most need attention in the classroom. One highly regarded platform featuring personalized courses and exercises is Khan Academy. Aside from being used by students and parents, the platform also allows teachers to provide individualized video tutorials and practice exercises in many different subjects, predominantly math. Specifically, teachers can modify tutorials and playlists and recommend certain videos and exercises to students.
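To make the idea concrete, here is a minimal sketch in Python of how indicators like prior knowledge, ability level, and interests could drive a pacing decision. The record format, scoring weights, and function names are hypothetical illustrations, not the model any particular platform actually uses.

```python
# Hypothetical sketch: pick the next exercise for a student based on the
# indicators mentioned above -- prior knowledge, academic ability, interests.
from dataclasses import dataclass, field

@dataclass
class StudentProfile:
    prior_knowledge: dict        # topic -> mastery score in [0, 1]
    ability_level: float         # overall academic ability in [0, 1]
    interests: set = field(default_factory=set)

def recommend_next_exercise(profile, exercises):
    """Rank exercises: prefer topics the student has not yet mastered,
    matched to their ability level, with a small boost for stated interests."""
    def score(ex):
        mastery = profile.prior_knowledge.get(ex["topic"], 0.0)
        gap = 1.0 - mastery                                  # favour weak topics
        fit = 1.0 - abs(ex["difficulty"] - profile.ability_level)
        interest = 0.2 if ex["topic"] in profile.interests else 0.0
        return gap + fit + interest
    return max(exercises, key=score)

student = StudentProfile(
    prior_knowledge={"fractions": 0.9, "algebra": 0.3},
    ability_level=0.6,
    interests={"algebra"},
)
exercises = [
    {"topic": "fractions", "difficulty": 0.5},
    {"topic": "algebra", "difficulty": 0.6},
]
print(recommend_next_exercise(student, exercises))  # -> the algebra exercise
```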


China’s Baidu to open-source its deep learning AI platform

The announcement follows the open-sourcing in the last two years of other machine intelligence and deep learning tools such as Torch and machine-vision technology from Facebook, TensorFlow from Google, Computational Network Toolkit (CNTK) from Microsoft and DSSTNE from Amazon.com, as well as independent open-source frameworks such as Caffe. Baidu has also open-sourced other pieces of its AI code. But Xu Wei, the Baidu distinguished scientist who led PaddlePaddle’s development, said this software is intended for broader use even by programmers who aren’t experts in deep learning, which involves painstaking training of software models. “You don’t need to be an expert to quickly apply this to your project,” Xu said in an interview. “You don’t worry about writing math formulas or how to handle data tasks.”
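As an aside on what a framework spares you from: the "math formulas" Xu mentions are the gradients and update rules you would otherwise derive and code by hand. A rough sketch of that manual work for a one-layer model, in plain NumPy, purely for illustration (this is not PaddlePaddle's API):

```python
# One gradient-descent step written out by hand -- the kind of math a deep
# learning framework such as PaddlePaddle generates automatically.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))          # 8 examples, 3 features
y = rng.normal(size=(8, 1))          # regression targets
W = rng.normal(size=(3, 1)) * 0.1    # weights to learn
lr = 0.1                             # learning rate

for step in range(100):
    pred = X @ W                     # forward pass
    err = pred - y
    loss = (err ** 2).mean()         # mean squared error
    grad = 2 * X.T @ err / len(X)    # gradient derived by hand
    W -= lr * grad                   # parameter update
print("final loss:", loss)
```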


What is Data Quality and How Do You Measure It for Best Results?

What do we do when we find errors or issues? Typically, you can do one of four things:

Accept the Error – If it falls within an acceptable standard (e.g., Main Street instead of Main St), you can decide to accept it and move on to the next entry.

Reject the Error – Sometimes, particularly with data imports, the information is so severely damaged or incorrect that it is better to simply delete the entry altogether than to try to correct it.

Correct the Error – Misspellings of customer names are a common error that can easily be corrected. If there are variations on a name, you can set one as the “Master” and keep the data consolidated and correct across all the databases.

Create a Default Value – If you don’t know the value, it can be better to have something there than nothing at all.
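Those four options map naturally onto a small validation routine. Below is a minimal sketch in Python, assuming a hypothetical customer-record format and a toy street-suffix lookup; a production pipeline would use proper validation rules and audit logging.

```python
# Minimal sketch of the accept / reject / correct / default decisions for a
# single field (hypothetical record format, for illustration only).
ACCEPTED_VARIANTS = {"Main St": "Main St", "Main Street": "Main St"}  # accept / correct map
DEFAULT_COUNTRY = "US"                                                # default value

def clean_record(record):
    cleaned = dict(record)

    # Accept or correct the error: normalise known spelling variants.
    street = record.get("street")
    if street in ACCEPTED_VARIANTS:
        cleaned["street"] = ACCEPTED_VARIANTS[street]
    elif not street:
        return None                                     # reject: unusable entry

    # Create a default value: better than leaving the field empty.
    cleaned.setdefault("country", DEFAULT_COUNTRY)
    return cleaned

print(clean_record({"name": "Ada", "street": "Main Street"}))
# -> {'name': 'Ada', 'street': 'Main St', 'country': 'US'}
print(clean_record({"name": "Bob", "street": ""}))   # -> None (rejected)
```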


How Economists View the Rise of Artificial Intelligence

“Economists think of technology as drops in the cost of particular things,” Agrawal said. For example, the advent of calculators and rudimentary computers lowered the cost for people to perform basic arithmetic, which aided workers at the census bureau who had previously slaved away for hours manually crunching data without the help of those tools. Similarly, with the rise of digital cameras, improvements in software and hardware helped manufacturers run better internal calculations within the device to help users capture and improve their digital photos. Researchers essentially applied calculations to the old-school field of photography, something previous generations probably never believed would be touched by math, he explained.


Companies complacent about data breach preparedness

Even as organizations are paying more attention to data breach preparedness, most aren't giving it the attention needed to execute their plans successfully when the time comes. Ponemon found that 38 percent of organizations have no set time period for reviewing and updating their plan, and 29 percent have not reviewed or updated their plan since it was first put in place. Only 27 percent of organizations surveyed felt confident in their ability to minimize the financial and reputational consequences of a breach, and 31 percent lacked confidence in dealing with an international incident. Meanwhile, the threats keep growing: in April, Symantec released its 2016 Internet Security Threat Report, which found that ransomware increased by 35 percent in 2015.


Protection is dead. Long live detection.

All too often, detection is an afterthought. A lot of planning and money go toward hardening protections, and then an intrusion detection system or a security information and event management (SIEM) system is tacked on. It’s not enough. Detection strategy and architecture have to be the equal of protection strategy and architecture. If most organizations were already treating protection and detection equally, attackers would not be spending an average of 200 days inside target systems or networks before being detected. More than six months is plenty of time for adversaries to fully achieve their goals, explore, define new goals and find new targets. Don’t misunderstand: as essential as detection is, it is not necessarily a fail-safe. But the sooner a breach is detected, the sooner you can mount a defense and stop adversaries from achieving their goals, or at least minimize the damage.
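Even a single naive detection rule illustrates the kind of signal a tacked-on IDS or SIEM is meant to surface. Here is a minimal sketch, assuming a hypothetical log format of (timestamp, source IP, success) tuples; a real SIEM would correlate far richer signals across many sources.

```python
# Naive detection rule for illustration: flag any source IP with many failed
# logins inside a short window (hypothetical log format, not a real product).
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
THRESHOLD = 10

def detect_bruteforce(events):
    """events: iterable of (timestamp, source_ip, success) tuples, sorted by time."""
    recent = defaultdict(list)          # ip -> timestamps of recent failures
    alerts = []
    for ts, ip, success in events:
        if success:
            continue
        recent[ip] = [t for t in recent[ip] if ts - t <= WINDOW] + [ts]
        if len(recent[ip]) >= THRESHOLD:
            alerts.append((ts, ip))
    return alerts

# Example: 12 failed logins from one address within a minute trips the rule.
base = datetime(2016, 10, 29, 9, 0)
events = [(base + timedelta(seconds=5 * i), "203.0.113.7", False) for i in range(12)]
print(detect_bruteforce(events))
```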


How much does a data breach actually cost?

Knowing these numbers gives a sense of how to gauge a breach's relative size. And when it comes to measuring the cost of a data breach, size matters. It’s intuitive and true: the more records lost, the higher the cost. According to the same Ponemon study, the average cost of a data breach involving fewer than 10,000 records was nearly $5 million, while a breach of more than 50,000 records had an average cost of $13 million. Reviewing the numbers, it’s clear data breaches are a real and growing financial threat to businesses. The good news is that this is a cost that can be mitigated with a proactive investment in cybersecurity measures. Knowing the potential and average cost also gives business owners an idea of how much to budget to secure their information.
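For budgeting purposes, the Ponemon averages quoted above can feed a rough expected-cost estimate. In the sketch below, the annual breach probability and the interpolation between the two quoted brackets are assumptions for illustration, not figures from the study.

```python
# Back-of-the-envelope budgeting using the averages quoted above
# (about $5M for breaches under 10,000 records, $13M above 50,000 records).
def expected_breach_cost(records_held, annual_breach_probability):
    if records_held < 10_000:
        avg_cost = 5_000_000
    elif records_held > 50_000:
        avg_cost = 13_000_000
    else:
        # Linear interpolation between the two quoted brackets (assumption).
        frac = (records_held - 10_000) / 40_000
        avg_cost = 5_000_000 + frac * 8_000_000
    return annual_breach_probability * avg_cost

# E.g., 30,000 customer records and an assumed 10% chance of a breach per year:
print(f"${expected_breach_cost(30_000, 0.10):,.0f} expected annual exposure")
# -> $900,000 expected annual exposure
```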



Quote for the day:


"How we think shows through in how we act. Attitudes are mirrors of the mind. They reflect thinking." -- David Joseph Schwartz

