Calculating the Costs of a Cyber Breach: Becoming the “Antifragile” Cyber Organization
We like the antifragile concept for two main reasons. First, when it comes to cybersecurity, what concerns people like us are the low-probability/high-impact events, sometimes called “fat-tail” events, that are difficult to account for and even harder to predict. Sure, we can say that a spear-phishing campaign could be catastrophic, but identifying which spear-phishing campaign will be the straw that breaks the camel’s back is far harder, if not impossible. Second, we like the antifragile concept because it is not only about resisting the breach but also about learning from the breach attempt. We like that, and that is where we would like all organizations to be when it comes to their cyber posture. (Note: we are giving you the super oversimplified version of the antifragile concept.) So, if we want to become an “antifragile cyber organization,” where do our concerns lie? Actually, it is not so much with the technical capabilities.
MADIoT – The nightmare after XMAS (and Meltdown, and Spectre)
Now, there is a much larger underlying issue. Yes, software bugs happen, and hardware bugs happen. The former are usually fixed by patching the software; in most cases the latter are fixed by updating the firmware. However, that is not possible with these two vulnerabilities, as they are caused by a design flaw in the hardware architecture that is only fixable by replacing the actual hardware. Luckily, through cooperation between the suppliers of modern operating systems and the hardware vendors responsible for the affected CPUs, the operating systems can be patched and, where necessary, complemented with additional firmware updates for the hardware. Additional defensive layers that prevent malicious code from exploiting the holes, or at least make it much harder, are an “easy” way to make your desktop, laptop, tablet and smartphone devices (more) secure. Sometimes this comes at the penalty of a slowdown in device performance.
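On Linux, the simplest way to see whether these mitigations are actually active on a given machine is to read the kernel’s own report. Below is a minimal sketch, assuming a kernel recent enough to expose the /sys/devices/system/cpu/vulnerabilities interface; older kernels or other operating systems will simply report nothing.

```python
# Minimal sketch: read the kernel's reported mitigation status for
# Meltdown/Spectre-class flaws on a Linux host. Assumes a kernel that
# exposes /sys/devices/system/cpu/vulnerabilities/ (roughly 4.15+).
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def mitigation_report() -> dict:
    """Return {vulnerability_name: kernel-reported status} for this host."""
    if not VULN_DIR.is_dir():
        return {}  # older kernel or non-Linux platform
    return {f.name: f.read_text().strip() for f in sorted(VULN_DIR.iterdir())}

if __name__ == "__main__":
    for name, status in mitigation_report().items():
        print(f"{name:20s} {status}")
```

Running it prints one line per known vulnerability, e.g. whether KPTI or retpolines are in effect, which is a quick way to confirm that the OS patches described above have actually landed.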
Forget bitcoin. Here come the blockchain ETFs
"Investors have been buying blindly, and there has been some abuse," said Christian Magoon, CEO of Amplify ETFs. "The SEC has to protect investors." But make no mistake. These two funds are set up to take advantage of the growing interest in blockchain. This is not the Winklevoss Bitcoin Trust, a fund that only owns bitcoin and is run by Cameron and Tyler, of Facebook and "The Social Network" movie fame. The Winklevii want to launch an ETF with the ticker symbol COIN, but the SEC has yet to approve it. In fact, the SEC seems unlikely to greenlight any funds that just want to invest in cryptocurrencies. Dalia Blass, director of the SEC's Division of Investment Management, wrote in a letter Thursday that it had many questions about these funds. And she said that until they are addressed, "we do not believe that it is appropriate for fund sponsors to initiate registration of funds that intend to invest substantially in cryptocurrency and related products."
Applying Quantum Physics to Data Security Matters Now and in the Future
As one of the few companies leveraging the power of quantum physics for random number generation, we also offer advanced key and policy management features that give customers complete control over the lifecycle and use of encryption keys. “Encryption and key management is complicated enough, and the injection of quantum mechanics into the discussion is enough to make most folks’ heads spin,” note co-authors Garrett Becker and Patrick Daly in the report. “By combining the added security of encryption keys based on quantum random numbers, advanced key lifecycle management and an HSM for protecting those keys, QuintessenceLabs has developed a compelling offering for those enterprises and agencies with a need for the highest level of data security.”
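To make the idea concrete, here is a minimal sketch (not QuintessenceLabs’ API) of key generation written against a pluggable entropy source: the operating system’s CSPRNG by default, with a quantum random number generator slotting in as just another byte source. The `qrng_bytes` function below is a hypothetical placeholder for a real QRNG driver call.

```python
# Minimal sketch, not any vendor's API: generate AES key material from a
# pluggable entropy source. os.urandom is the OS CSPRNG; a quantum RNG would
# simply be another byte source passed in.
import os
from typing import Callable

def generate_key(num_bytes: int = 32,
                 entropy_source: Callable[[int], bytes] = os.urandom) -> bytes:
    """Return num_bytes of key material from the given entropy source."""
    key = entropy_source(num_bytes)
    if len(key) != num_bytes:
        raise ValueError("entropy source returned too few bytes")
    return key

# Hypothetical QRNG source -- stands in for a real quantum entropy driver.
def qrng_bytes(n: int) -> bytes:
    raise NotImplementedError("replace with a real quantum RNG driver call")

aes_key = generate_key(32)                # 256-bit key from the OS CSPRNG
# aes_key = generate_key(32, qrng_bytes)  # same flow with a QRNG plugged in
```

The point of the indirection is that key lifecycle management (rotation, escrow, HSM storage) sits above the entropy source, so swapping in quantum-derived randomness does not change the rest of the key management flow.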
How long will a patient live? Deep learning takes on predictions
The team described their work in their paper, "Improving Palliative Care with Deep Learning," which is up on arXiv. The paper was submitted in November. The authors are Anand Avati, Kenneth Jung, Stephanie Harman, Lance Downing, Andrew Ng and Nigam Shah; their Stanford affiliations span the Department of Computer Science, the Center for Biomedical Informatics Research, the Department of Medicine and the Stanford University School of Medicine. The algorithm was not developed to replace doctors but rather to provide a tool to improve the accuracy of prognoses. As Jeremy Hsu wrote in IEEE Spectrum, it serves "as a benign opportunity to help prompt physicians and patients to have necessary end-of-life conversations earlier." One can think of it as a triage tool for improving access to palliative care, Stephanie Harman, clinical associate professor of medicine at Stanford University and a co-author of the new study, told Gizmodo.
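The shape of such a triage tool is easy to illustrate. The sketch below is not the Stanford model (which is a deep neural network trained on real EHR data); it trains a small classifier on synthetic “patient feature” vectors and then ranks held-out patients by predicted risk, which is the triage use the authors describe. All numbers and features here are made up for illustration.

```python
# Minimal sketch, not the Stanford model: a small classifier on synthetic
# "EHR feature" vectors predicting a binary mortality label, then used to
# rank patients by risk as a palliative-care triage list.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_patients, n_features = 2000, 50                      # toy sizes, illustrative only
X = rng.normal(size=(n_patients, n_features))          # stand-in for per-patient code counts
y = (X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=n_patients) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_tr, y_tr)

# Rank held-out patients by predicted risk for palliative-care review.
risk = model.predict_proba(X_te)[:, 1]
print("toy AUROC:", round(roc_auc_score(y_te, risk), 3))
```

In practice the ranking, not the raw probability, is what drives the workflow: the highest-risk patients are surfaced to clinicians as candidates for an earlier end-of-life conversation.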
New Security Architecture Practitioner’s Initiative
The Security Architecture Practitioner’s Initiative is a joint effort of The Open Group Security Forum (a global thought leader in Enterprise Architecture) and The SABSA Institute to articulate in a clear, approachable way the characteristics of a highly qualified Security Architect. The focus of this initiative is on the practitioner, the person who fills the role of the Security Architect, and on the skills and experience that make them great. This project is not about security architecture as a discipline, nor about a methodology for security architecture, but rather about people and what makes them great Security Architects. The project team consists of pioneering Security Architects drawn from both The Open Group Security Forum and The SABSA Institute who have between them many decades of security architecture experience at organizations such as Boeing, IBM, HP, and NASA. Operating under the auspices of The Open Group and in collaboration with The SABSA Institute
Public Blockchain's Lure Will Become Irresistible for Enterprises in 2018
Central banks are already experimenting with the tokenization of their own currencies, but doing so in private, permissioned or proprietary blockchains that are managed by the central banks. It is a good start, but the next logical step is to create the legal and regulatory framework that enables the tokenization of fiat currency on any industrial or public blockchain. Once a closed-loop tokenized industrial blockchain exists, many of the key foundations of specialized blockchains would become add-on features in the true economic blockchain. Trade finance is easy if you trust that the representation of 1,000 phones, each worth $1,000, is accurate: you can loan money against those tokens in the blockchain. Similarly, customs declarations, tax calculations, and product history and provenance are all easily derived from looking at the history of the tokens in that blockchain. No separate blockchain is required for trade finance, payments or product traceability.
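A toy model makes the trade-finance claim concrete. The sketch below is purely illustrative and not tied to any real blockchain platform: an in-memory ledger of asset-backed tokens whose ownership and transfer history are enough to derive both the collateral a lender could loan against and the provenance trail the article mentions.

```python
# Purely illustrative sketch (not a real blockchain): an in-memory ledger of
# asset-backed tokens. Collateral value and provenance both fall out of the
# token records and their transfer history.
from dataclasses import dataclass, field

@dataclass
class Token:
    token_id: str
    asset: str                                    # e.g. "phone"
    value_usd: float                              # declared value per unit
    history: list = field(default_factory=list)   # list of (from_owner, to_owner)

@dataclass
class Ledger:
    tokens: dict = field(default_factory=dict)    # token_id -> Token
    owners: dict = field(default_factory=dict)    # token_id -> current owner

    def mint(self, token: Token, owner: str) -> None:
        self.tokens[token.token_id] = token
        self.owners[token.token_id] = owner

    def transfer(self, token_id: str, new_owner: str) -> None:
        old = self.owners[token_id]
        self.tokens[token_id].history.append((old, new_owner))
        self.owners[token_id] = new_owner

    def collateral_value(self, owner: str) -> float:
        """Value a lender could loan against: sum of tokens this owner holds."""
        return sum(t.value_usd for tid, t in self.tokens.items()
                   if self.owners[tid] == owner)

# 1,000 phones worth $1,000 each, tokenized and held by an exporter.
ledger = Ledger()
for i in range(1000):
    ledger.mint(Token(f"phone-{i}", "phone", 1000.0), owner="exporter")
print(ledger.collateral_value("exporter"))   # 1,000,000.0 -- the loanable amount
```

Customs, tax and provenance queries are just different reads over the same token history, which is the article’s point about not needing a separate chain per use case.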
World Wide Data Wrestling
These days almost every person, company, and other entity is chasing data. By combining all this disparate data, predictive analytics can build highly accurate models to forecast pollution trends in advance, allowing civic agencies to make relevant predictions and changes to prevent spikes and keep pollution levels in check. Big data and analytics can also help improve traffic management in addition to monitoring pollution levels, and the fates of artificial intelligence and big data are intertwined ... Organizations collect data from a variety of sources, including business transactions, social media, and sensor or machine-to-machine data. In the past, storing it would’ve been a problem, but new technologies (such as Hadoop) have eased the burden. A comprehensive and widespread network like this, tracking the causes of pollution at the source, will allow government agencies to create smarter strategies to combat pollution.
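As an illustration of the kind of model a civic agency could run, here is a minimal sketch assuming a simple hourly PM2.5 sensor feed: it learns to predict the next hour’s reading from the previous six and flags forecast hours above a hypothetical alert threshold. The data here are synthetic; a real deployment would use actual sensor readings and a richer model.

```python
# Minimal sketch with synthetic data: forecast the next hour's PM2.5 level
# from the previous six readings and flag hours above an alert threshold.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
hours = np.arange(500)
pm25 = 60 + 20 * np.sin(hours / 24 * 2 * np.pi) + rng.normal(scale=5, size=500)

LAGS = 6   # use the last 6 hourly readings as features
X = np.column_stack([pm25[i:len(pm25) - LAGS + i] for i in range(LAGS)])
y = pm25[LAGS:]                                   # the reading one hour later

model = LinearRegression().fit(X[:-24], y[:-24])  # hold out the final day
forecast = model.predict(X[-24:])

threshold = 75.0                                  # hypothetical alert level
flagged = np.flatnonzero(forecast > threshold)
print("forecast hours above the alert level:", flagged.tolist())
```

The value for an agency is the lead time: an hour or more of warning on a predicted spike is enough to trigger traffic or industrial restrictions before the spike arrives.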
Don’t be fooled: AI-powered tech still needs to prove its intelligence
It’s essentially a definitional problem: for some reason, the industry is hellbent on using AI when what it actually means is machine learning (ML). This is a much narrower term, referring to what is essentially using trial and error to build a model that’s capable of guessing the answers to discrete questions very accurately. For example, take image recognition: say you want to build a system that separates pictures of cats from pictures of dogs. All you have to do is feed an ML algorithm enough pictures of cats, telling the system they are cats, and then enough pictures of dogs, telling it they are dogs. It will then build a model of what patterns to look for and eventually, after enough training, you should be able to feed it an unlabelled image, and it will be able to make a fairly accurate guess as to which of the two animals is in the picture.
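In code, that training loop is only a few lines. The sketch below uses synthetic 32x32 “images” so it runs anywhere; a real cat-versus-dog classifier would use a convolutional network and labelled photos, but the shape of the workflow, fit on labelled examples and then guess on unlabelled ones, is the same.

```python
# Minimal sketch of the training loop described above, on synthetic 32x32
# "images"; a real classifier would use a CNN and real labelled photos.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_class = 500
# Stand-in images: "cats" are slightly darker on average than "dogs".
cats = rng.normal(loc=0.4, scale=0.2, size=(n_per_class, 32 * 32))
dogs = rng.normal(loc=0.6, scale=0.2, size=(n_per_class, 32 * 32))
X = np.vstack([cats, dogs])
y = np.array([0] * n_per_class + [1] * n_per_class)   # 0 = cat, 1 = dog

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # the "trial and error" fit

# After training, unlabelled images get a fairly accurate guess.
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```

Nothing in this loop reasons about what a cat is; it only learns statistical patterns that separate the two labels, which is exactly why calling it “intelligence” overstates what the system does.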
What Aspiring Data Scientists Are Looking For in Hiring Companies
Not long ago, big data was the exclusive territory of the most prominent IT brands. There weren’t as many experts in the field 20 or even 10 years ago, and many small companies functioned perfectly fine without using big data. But this isn’t the case anymore. Even the smallest startup companies and entrepreneurial ventures now rely on big data to execute their business models, locate specific demographics and ensure long-term success. Businesses leave almost nothing to chance anymore, and much of this transition is tied directly to the big data boom. Perhaps above all else, data scientists want to work with a company that offers job security and stability. Many professionals in the niche realize how few openings exist in the field, so they’ll be pursuing long-term assignments whenever possible. Companies that can accommodate this need and provide guaranteed work will likely find it easier to fill roles in big data management, as opposed to those that only need to complete a one-time project.
Quote for the day:
"A leader does not deserve the name unless he is willing occasionally to stand alone." -- Henry A. Kissinger