“Once sanctions have been terminated, Iran can move forward in using ICT to transform industries across the country,” said Lalchandani. “The extent of these projects will depend largely on whether global oil prices rebound in the coming years. If they do, the increase in petrodollar revenues will help drive considerable transformation initiatives in the public sector, as well as significant modernisation efforts across the energy, manufacturing, telecommunications, finance, transportation and retail verticals.” Meera Kaul, CEO at regional value-added IT distributor Optimus Technology and Telecoms, agreed, saying this is a big opportunity for the regional IT supply channel. With the sanctions lifted, the $420bn Iranian economy could open up for regional businesses, she said.
Less than a quarter of respondents to the latest Tech Pro Research survey said their company is currently using IoT-connected devices to collect data, but more respondents said their business plans to get into the IoT game within the next year. Respondents in those two groups reported a wide variety of uses for data insights, including predicting trends, improving products, capacity planning, R&D and security. Among respondents whose companies have implemented IoT data collection, 71% said that less than 20% of their IT budget goes towards those efforts, with the majority spending most of their IoT funds on software.
According to the SEC's Office of Compliance Inspections and Examinations, other areas of focus include governance and risk assessment, access controls, data loss prevention, training, and incident response. "We expect continued scrutiny of the areas covered in past years, with new emerging risk areas being evaluated," said Glenn Siriano, financial services leader for KPMG Cyber at KPMG. Those new areas include emerging technologies, new external threat vectors, deeper assessments of third-party vendors, usage of social media, and managing insider threats, he said. And the SEC has been moving beyond conducting inspections and issuing guidance, said Dave Mahon, CSO at CenturyLink.
“No doubt VR will help to create buzz among media, gamers and the niche audience demanding immersive experiences,” said Husson. “But will it offer consumer benefits for the masses? The short answer is: no. In 2016, reach for VR platforms will remain limited. While the primary use cases will be for immersive gaming and entertainment environments, innovative marketers at retail, automotive, travel or hospitality companies will start piloting VR prototypes to connect in new ways with consumers in the discovery and explore phases of the consumer lifecycle. The vast majority of marketers should not even care about it and have many other things to fix.”
It’s no secret that “data scientist” is one of the hottest job titles going. DJ Patil famously proclaimed data scientist “The Sexiest Job of the 21st Century” before moving on to join the White House as the first chief data scientist of the U.S. Once a rarefied in-house role at a few leading Internet companies such as LinkedIn and PayPal, data science has since grown into a global phenomenon, impacting organizations of all sizes across many industries. More recently, a buzzy new job title has emerged from the same group of companies: that of site reliability engineer, or SRE. Will SREs follow the same path of rapid growth that data scientists did before them? Before we dive into that question, let’s consider the context that has led to the creation of site reliability engineering.
"The question is, how do we use data to allow cities to tackle big and small problems?" says Saf Rabah, VP of product at Socrata. Untouched and unanalyzed government data—what Rabah calls "dark data"—usually sits on enterprise file systems and databases. The city of Seattle, for example, has 1,200 different enterprise systems, says Rabah. Socrata’s job is to make that data usable—but not just for the city. Aside from other government departments, there are three groups that could benefit from data made public: citizens, developers, and advocacy groups. "Citizens have information needs too, like, ‘I need to know how safe my neighborhood is,’ or ‘I’m about to move to a new city.’ Everyone has information needs that are very unique to them at that point in time," says Rabah.
It keeps important data safe from disruptions and disasters, and provides a way to keep applications and data off-site in a highly secured environment. Backup technology offers real advantages, such as automation and data encryption. Some business experts argue that the cloud is not a secure place for important data; however, online backup services can encrypt data to keep it safe. By contrast, external hard drive storage is not secure and can be stolen or misplaced. Online backup is also reasonably priced, giving companies a way to keep important files and documents safe from disruption and disaster at a reasonable rate.
You have your impulse self (reptilian brain and limbic system) and a rational self that protects you from that impulse self (see “The Science of Willpower”). Your prefrontal cortex protects you from your impulsive animal mind. But because the deeper layers of the brain are older, more energy-efficient, and more powerful, the impulse self has more energy than the rational self. You cannot switch off your internal crocodile or monkey. You can only use the neocortex to override them and prioritize rational decisions. But if you are drunk, tired, sleep-deprived or distracted, your prefrontal cortex does not work properly. You start making decisions based on immediate gratification (like drinking coffee with sugar to gain energy), not thinking about what will happen next.
Just detecting anomalies can still leave you with a lot of data to look at. A large organization could see thousands of anomalies a day, so Splunk uses further analysis to keep that manageable. Maier expects the tool to surface five or ten threats a day, in enough detail to make it clear what’s happening (avoiding the problem where noisy or overly complex alerting systems are ignored when they find a real breach). “We have the full picture on the ‘kill chain’ [of the attack]. We provide a security organization with the information, from the compromise point – when did the attacker come in, what was the initial attack vector, when did they expand in this environment, what other files or servers or user accounts did they connect to?”
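This triage step, collapsing thousands of raw anomalies into a short, ranked list of threats, can be sketched generically. This is not Splunk's implementation; the `triage` function, the entity/severity fields, and the sample alerts below are illustrative assumptions:

```python
from collections import defaultdict

def triage(anomalies, top_n=10):
    """Collapse raw anomalies into a short, prioritized threat list.

    Each anomaly is a dict like {"entity": "host-17", "severity": 5}.
    Anomalies on the same entity are correlated together, and entities
    are ranked by total severity so only the top_n surface for review.
    """
    by_entity = defaultdict(list)
    for a in anomalies:
        by_entity[a["entity"]].append(a)

    # Rank entities by the combined severity of their anomalies.
    ranked = sorted(
        by_entity.items(),
        key=lambda kv: sum(a["severity"] for a in kv[1]),
        reverse=True,
    )
    return [
        {"entity": entity,
         "events": len(events),
         "score": sum(a["severity"] for a in events)}
        for entity, events in ranked[:top_n]
    ]

# Hypothetical alerts: two anomalies on one host outrank a lone low one.
alerts = [
    {"entity": "host-17", "severity": 5},
    {"entity": "host-17", "severity": 4},
    {"entity": "laptop-3", "severity": 1},
]
threats = triage(alerts, top_n=2)
```

A real system would also correlate anomalies across entities in time order to reconstruct the kill chain described above; this sketch only shows the prioritization idea.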
The assignment problem is designed for exactly this purpose. We start with m agents and n tasks. We make the rule that every agent has to be assigned to a task. For each agent-task pair, we figure out the cost of having that agent perform that task. We then figure out which assignment of agents to tasks minimizes the total cost. Of course, it may be true that m != n, but that's OK. If there are too many tasks, we can make up a "dummy" agent that is more expensive than any of the others. This ensures that the least desirable task is left to the dummy agent, and we can remove that assignment from the solution. Or, if there are too many agents, we can make up a "dummy" task that is free for any agent. This ensures that the agent with the highest true cost gets the dummy task and sits idle.
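The dummy-padding trick can be sketched with a tiny brute-force solver. The cost matrix and the `solve_assignment` helper below are illustrative; a production system would use the Hungarian algorithm (e.g. SciPy's `linear_sum_assignment`) rather than trying every permutation:

```python
from itertools import permutations

def solve_assignment(cost):
    """Brute-force solver for the assignment problem.

    cost[i][j] is the cost of assigning agent i to task j; the matrix
    must be square (pad with dummy rows/columns first if m != n).
    Returns (best_total, assignment), where assignment[i] is the task
    given to agent i. Exponential time -- fine only for small n.
    """
    n = len(cost)
    best_total, best_perm = None, None
    for perm in permutations(range(n)):  # try every possible assignment
        total = sum(cost[i][perm[i]] for i in range(n))
        if best_total is None or total < best_total:
            best_total, best_perm = total, perm
    return best_total, list(best_perm)

# Three agents, two real tasks: pad with a "dummy" task (column) that
# is free for everyone, so the costliest leftover agent sits idle.
DUMMY = 0
cost = [
    [4, 9, DUMMY],  # agent 0
    [2, 8, DUMMY],  # agent 1
    [7, 3, DUMMY],  # agent 2
]
total, assignment = solve_assignment(cost)
# assignment[i] == 2 means agent i drew the dummy task and is idle.
```

Here the solver gives the two real tasks to agents 1 and 2 (costs 2 and 3) and leaves agent 0 with the dummy task, for a total cost of 5.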
Quote for the day:
"It's not always necessary to be strong, but to feel strong." -- Jon Krakauer