Two things emerge from this - structuring and transforming data on ingestion incurs a performance hit, and risks data loss. If we try to do complex computations on a large amount of incoming data, we will most likely run into serious performance issues. If we structure data on ingestion, we might realize later that we need pieces of data discarded during structuring. The thing is, with vast and complex data, we most likely won’t know in advance what insights we can extract from it. We don’t know what value, if any, the collected data will bring to our business, and if we try to guess, there is a fair chance of guessing wrong. What do we do then? We store raw data. We don’t want to just throw it in there, though, as that leads to a data swamp - a pool of stale data with no information about what it represents. Data should be enriched with metadata describing its origin, ingestion time, and so on. We can also partition data on ingestion, which makes later processing more efficient. If we don’t get the partitioning right on the first try, we’ll still have all the data and can re-partition it without any loss.
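As a rough illustration, a minimal ingestion step might store the payload untouched, attach a small metadata record, and partition by source and UTC date. The `ingest` function, lake layout, and metadata fields below are hypothetical sketches, not from any specific data-lake product:

```python
import json
import time
from pathlib import Path

def ingest(payload: bytes, source: str, lake_root: str = "lake/raw") -> Path:
    """Store the payload untouched, partitioned by source and UTC date,
    alongside a small metadata record describing its origin."""
    now = time.gmtime()
    partition = Path(lake_root) / source / time.strftime("%Y/%m/%d", now)
    partition.mkdir(parents=True, exist_ok=True)

    record_id = str(int(time.time() * 1_000_000))  # crude unique id for this sketch
    raw_path = partition / f"{record_id}.bin"
    raw_path.write_bytes(payload)  # raw bytes, no transformation or structuring
    metadata = {
        "source": source,
        "ingested_at_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", now),
        "size_bytes": len(payload),
    }
    (partition / f"{record_id}.meta.json").write_text(json.dumps(metadata))
    return raw_path
```

Because the raw bytes are preserved, a later re-partitioning job can rebuild the directory layout under a different scheme without losing anything.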
The Drive.ai program lacks the ambition of, say, Waymo’s Phoenix-area AV service, which ferries passengers around without a human driver ready to take the wheel, but the project—publicly announced, small in scope, conducted in partnership with city officials—seems to take a more measured approach to AV testing than exists elsewhere. Bamonte described Frisco’s AV program as “kind of crawl, walk, run.” “We don’t want developers to just plop down unannounced and start doing a service,” Bamonte told me. He compared the Drive.ai testing favorably to Tesla’s, whose cars have ambitious Autopilot features that have already been deployed in thousands of consumer vehicles, wherever their drivers take them. So far, Tesla’s Autopilot mode has caused several high-profile crashes on public roads, including fatal accidents in Florida and California. The Uber crash has added a “note of caution,” Bamonte said, but “it’s our responsibility to continue to explore and test this technology in a responsible way.” For him, that means closed tracks and computer simulations; deployment on public streets will follow only after a public education campaign and a period of soliciting feedback.
"Alongside the convergence of activities and systems, with IoT there's all sorts of expansion, the perimeter also disappears," says Gartner's Contu. As a result, business risk is fast becoming the responsibility of the whole organisation, not just one small, dedicated section of it. "Organisations need to take a business-driven security approach, which encourages all stakeholders to be engaged in the risk conversation, identifying what matters most to them, so threats can be tackled in a way that safeguards what's most important -- whether that's customer data, intellectual property or another business-critical asset," said Knowles. IT, security, application builders, developers, DevOps teams and more: all of these parts of the organisation need to be thinking about business risk on a day-to-day basis -- and what they need to think about is constantly changing. "That's a critical part of thinking about a risk-based model: it's not static, it's not something you have consultants looking at; it should be instrumented and refined over time and changing depending on what you see," said Toubba, who adds that information on cyber threats should be continually updated in the same way.
Privacy concerns remain; employees will want to avoid mingling their personal lives with their professional lives, but this concern isn’t new. Still, it will be up to employees and perhaps unions to ensure an individual’s right to privacy remains intact, a task that becomes more challenging as more employees find themselves constantly connected to the office. Airports can rely on facial recognition tied to other smart technologies to detect potential security risks. Smart features can improve convenience as well; smart technology combined with artificial intelligence can provide useful metrics on airport crowds, which can be used to ease the process of traveling through airports. Airports and other high-security areas aren’t the only spaces that can benefit from enhanced security; sports stadiums, for example, might be able to provide a safer experience with smart technology. These technologies, however, can easily encroach on an individual’s privacy by attaching a name and other identifying information to activities in these spaces. People might be willing to trade some privacy for easier check-ins, but having that information stored indefinitely might cause concerns.
Blockchain technology should be leveraged in the reinsurance process to increase interoperability. With a shared digital ledger, there need no longer be the discrepancies in data formats, processes, and standards that currently plague the industry. A permissioned blockchain ledger can be used to streamline communication, the flow of information, and data sharing between insurers and reinsurers, serving as an available and trusted repository of contract information. The process becomes faster, more efficient, and less risky as data related to loss records, asset ownership, or transaction histories is recorded on a blockchain that is trusted to be authentic and up-to-date. Access to this information can be heavily permissioned, with granular access controls and exhaustive rules governing read and write capabilities per user. Reinsurers can query the blockchain to retrieve updated, real-time, trusted information rather than rely on a centralized insurance institution to report on relevant items (e.g., losses or transfers of ownership). This can massively expedite underwriting times.
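To make the per-user access-control idea concrete, here is a minimal, purely illustrative sketch in plain Python (not a real blockchain or smart-contract SDK; the class and record names are invented) of granular read/write rules over shared contract records:

```python
class PermissionedLedger:
    """Toy model of per-user, per-record read/write permissions,
    as a permissioned ledger might enforce between insurers and reinsurers."""

    def __init__(self):
        self.records = {}  # record id -> contract data
        self.acl = {}      # record id -> {user: set of rights}

    def grant(self, record_id: str, user: str, rights: set) -> None:
        self.acl.setdefault(record_id, {}).setdefault(user, set()).update(rights)

    def write(self, record_id: str, user: str, data) -> None:
        if "write" not in self.acl.get(record_id, {}).get(user, set()):
            raise PermissionError(f"{user} may not write {record_id}")
        self.records[record_id] = data

    def read(self, record_id: str, user: str):
        if "read" not in self.acl.get(record_id, {}).get(user, set()):
            raise PermissionError(f"{user} may not read {record_id}")
        return self.records[record_id]
```

Every read and write is checked against rules scoped to a single user and a single record, mirroring the "exhaustive rules governing read and write capabilities per user" described above.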
Well, it turns out the lock broadcasts its own Bluetooth MAC address over the airwaves, and uses that MAC address to calculate the key used to lock and unlock the device. Tierney cracked the system disturbingly quickly: "It upper cases the BLE MAC address and takes an MD5 hash. The 0-7 characters are key1, and the 16-23 are the serial number." The upshot? He was able to write a script, port it to an Android app, and open any nearby Tapplock wirelessly using his phone and Bluetooth, taking less than two seconds each time. "This level of security is completely unacceptable," he complained. "Consumers deserve better, and treating your customers like this is hugely disrespectful. To be honest, I am lost for words." The problem was so bad that Tierney informed the manufacturer, and gave it seven days before he went public with the fundamental flaw. Just hours before the deadline was up, Tapplock put out a security advisory warning that everyone needed to upgrade their lock's firmware "to get the latest protection." "This patch addresses several Bluetooth/communication vulnerabilities that may allow unauthorised users to illegally gain access," the company noted.
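The derivation Tierney describes fits in a few lines. The function name below is ours, and the slice positions follow his quoted description (hex characters 0-7 for key1, 16-23 for the serial number):

```python
import hashlib

def derive_tapplock_credentials(ble_mac: str):
    """Reproduce the weakness Tierney described: both secrets fall
    straight out of an MD5 hash of the broadcast BLE MAC address."""
    digest = hashlib.md5(ble_mac.upper().encode("ascii")).hexdigest()
    key1 = digest[0:8]      # hex characters 0-7
    serial = digest[16:24]  # hex characters 16-23
    return key1, serial
```

Because the MAC address is broadcast in the clear, anyone in Bluetooth range can compute both values, which is why the attack took under two seconds per lock.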
Technology that unthinkingly tramples over moral boundaries risks public rejection. Hence, researchers are openly discussing the ethical challenges likely to arise. Almost no one thinks a single cell is conscious, and today's organoids aren't either, but there is a continuous arc of increasing complexity that the technology looks certain to traverse on the way to fully realistic human brains. What if a cherry-sized organoid of 10 million neural cells gains awareness of itself, or shows signs of distress? At what point does it become clear that organoids have crossed the boundary into beings deserving of rights, or warranting the appointment of a legal guardian? Right now, no one even knows how to reliably measure attributes of consciousness or thought in a piece of neural matter. We can do so in real brains, but what about things that are only partially like brains? Things may get weirder still with bits of artificial brain tissue implanted into the brains of other organisms, resulting in chimeras – organisms not fully of any one species, but part human and part mouse, pig or dog. Like computer-based AI, this research is racing ahead at alarming speed.
In a monolithic architecture, there can often be a single point of failure that could bring down an entire operation. In a microservices architecture, application components operate in isolation from one another, which means a security breach will not immediately affect the entire stack. Despite this architectural trait, you can still expect to face several complex security challenges. One challenge is that there is simply more attack surface to target. It's hard to keep an eye on everything within your stack when your application is made up of dozens of different microservices. A microservices-based app could use 10 containers, which translates to 10 times the number of instances to monitor. This challenge multiplies if those containers are regularly shut down and resurrected. The second issue involves the blurred perimeter of a microservices architecture. Unlike the clear-cut security perimeter that a firewall provides a monolithic app, there is no such definitive boundary with cloud-based microservices apps.
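As a toy illustration of that churn problem, a monitoring inventory has to keep pace as instances come and go. This minimal sketch (all names hypothetical, not tied to any container platform) shows how the monitored set changes as containers are stopped and resurrected under new IDs:

```python
class InstanceRegistry:
    """Toy inventory of running microservice instances; monitoring agents
    face the same bookkeeping as containers stop and restart."""

    def __init__(self):
        self._active = {}  # container id -> service name

    def started(self, container_id: str, service: str) -> None:
        self._active[container_id] = service

    def stopped(self, container_id: str) -> None:
        self._active.pop(container_id, None)

    def monitored(self) -> dict:
        """Snapshot of every instance that currently needs watching."""
        return dict(self._active)
```

Each restart produces a fresh container ID for the same service, so the set of things to monitor is a moving target rather than a fixed host list.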
Organizations may feel more confident about confronting the types of attacks that have become familiar in recent years, but they still lack the capability to deal with more advanced, targeted assaults. Overall, 68% of respondents have some form of formal incident response capability, but only 8% describe their plan as robust and spanning third parties and law enforcement. To improve their chances of fighting back against cyberattackers, organizations will have to overcome the barriers that currently make it difficult for cybersecurity operations to add value. For example, 59% of GISS respondents cite budget constraints, while a similar number lament a lack of skilled resources; 29% complain about a lack of executive awareness or support. The disconnect between cybersecurity and the C-suite persists: as the EY report highlights, a mere 36% of corporate boards have sufficient cybersecurity knowledge for effective oversight of risk. Ultimately, organizations that fail to obtain executive support and devote the resources necessary for adequate cybersecurity will find it very difficult to manage the risks they face.
The McAfee report details a 2017 cryptocurrency phishing scam in which a cybercriminal set up a fraudulent cryptocurrency “wallet” service. After collecting authentication information from the service’s users over the course of six months, the thief drained $4 million from unsuspecting customers’ accounts. The researchers also provide examples of how cybercriminals using malware have been empowered by the proliferation of cryptocurrencies. The explosion of ransomware over the last few years has become operationally possible in large part due to cryptocurrencies, which cloak the identities of cybercriminals behind ransom payment transfers. The research also illustrates the growing trends of malicious miners and cryptojacking, which combine an infection vector (malware) with a monetization method (cryptocurrency mining). Recent McAfee Labs research in this cybercrime category found that total coin miner malware grew a stunning 629% in Q1 2018, from around 400,000 samples in Q4 2017 to more than 2.9 million samples in the first quarter of this year.
Quote for the day:
"Leadership is not about titles, positions or flowcharts. It is about one life influencing another." -- John C. Maxwell