Daily Tech Digest - July 11, 2018

Georgia Tech report outlines the future of smart cities

One key point researchers made is that IoT deployed in public spaces – in collaboration between city governments, private enterprise and citizens themselves – has a diverse group of stakeholders to answer to. Citizens require transparency and rigorous security and privacy protections, in order to be assured that they can use the technology safely and have a clear understanding of the way their information can be used by the system. The research also drilled down into several specific use cases for smart city IoT, most of which revolve around engaging more directly with citizens. Municipal services management offerings, which allow residents to communicate directly with the city about their waste management or utility needs, were high on the list of potential use cases, along with management technology for the utilities themselves, letting cities manage the electrical grid and water system in a more centralized way. Public safety was another key use case – for example, the idea of using IoT sensors to provide more accurate information to first responders in case of emergency.

10 Tips for Managing Cloud Costs

Part of the reason why cost management is so challenging is that organizations are spending a lot of money on public cloud services. More than half of enterprises (52%) told RightScale that they spend more than $1.2 million per year on cloud services, and more than a quarter (26%) spend over $6 million. That spending will likely be much higher next year, as 71% of enterprises plan to increase cloud spending by at least 20%, while 20% expect to double their current cloud expenditures. Given those numbers, it's unsurprising that Gartner is forecasting that worldwide public cloud spending will "grow 21.4% in 2018 to total $186.4 billion, up from $153.5 billion in 2017." Another problem that contributes to cloud cost management challenges is the difficulty organizations have tracking and forecasting usage. The survey conducted for the SoftwareONE Managing and Understanding On-Premises and Cloud Spend report found that unpredictable budget costs were among the biggest cloud management pain points, cited by 37% of respondents, while 30% struggled with a lack of transparency and visibility.
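The growth figures quoted above compound quickly. A back-of-the-envelope sketch, using the $1.2 million RightScale threshold as a hypothetical baseline (the actual spend of any given enterprise will differ):

```python
# Rough projection of next year's cloud spend using the survey figures
# quoted above. The baseline is hypothetical: the $1.2M threshold that
# more than half of surveyed enterprises already exceed.
baseline_annual_spend = 1_200_000  # dollars per year

# 71% of enterprises plan to increase spend by at least 20%,
# while 20% expect to double their current expenditures.
plus_20_percent = baseline_annual_spend * 1.20  # lower-bound growth scenario
doubled = baseline_annual_spend * 2             # doubling scenario

print(f"+20% scenario: ${plus_20_percent:,.0f}")
print(f"2x scenario:   ${doubled:,.0f}")
```

Even the conservative scenario adds roughly a quarter-million dollars a year, which is why unpredictable budgets rank so high among the pain points.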

How to Receive a Clean SOC 2 Report

Having a documented control matrix will be beneficial for more than just compliance initiatives; it becomes your source for how risk controls are developed and implemented and can be useful for augmenting corporate information security policies. For SOC 2, the control matrix becomes an important reference document for auditors. For instance, Trust Services Criteria 4 relate to monitoring of controls, so creating a list of how your organization is confirming controls are well designed and operating effectively makes it easy for auditors to validate that your stated controls are in place, designed to meet your security and confidentiality commitments, and are effective in doing so. Here is a concrete example: A control in your environment says servers need to be hardened to CIS benchmarks. How are you evaluating the effectiveness of this control? Are the servers hardened to your specification before going into production? Are they meeting benchmarks on an ongoing basis? An easy way to meet the monitoring requirement is to use a tool like Tripwire Enterprise.
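One way to make the monitoring requirement auditable is to keep the control matrix itself as structured data, so that controls lacking a documented monitoring mechanism are easy to surface. This is a minimal sketch; the field names are illustrative, not drawn from any SOC 2 schema:

```python
# A control matrix kept as plain data. Field names are illustrative
# placeholders, not a standard SOC 2 format.
control_matrix = [
    {
        "control": "Servers hardened to CIS benchmarks before production",
        "criteria": "Trust Services Criteria 4 (monitoring of controls)",
        "monitoring": "Tripwire Enterprise baseline scan",
        "frequency_days": 30,
    },
    {
        "control": "Quarterly access reviews for production systems",
        "criteria": "Trust Services Criteria 4 (monitoring of controls)",
        "monitoring": "Access review sign-off tracked in ticketing system",
        "frequency_days": 90,
    },
]

def unmonitored_controls(matrix):
    """Return the controls that lack a documented monitoring mechanism."""
    return [c["control"] for c in matrix if not c.get("monitoring")]
```

Running `unmonitored_controls` over the matrix before an audit flags exactly the gaps an auditor would otherwise find for you.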

Most Enterprise of Things initiatives are a waste of money

What’s truly needed is a consolidated ability to capture and process all of the data and convert it into meaningful insights. Many companies provide analytics engines to do this (e.g., SAP, Google, Oracle, Microsoft and IBM). But to have truly meaningful company-wide analysis, a significantly more robust solution is needed than stand-alone, singular instances of business intelligence/analytics. How should companies enable the full benefits of EoT? They need a strategy that provides truly meaningful “actionable intelligence” from all of the various data sources, not just the 15 to 25 percent that is currently analyzed. That data must be integrated into a consolidated (although it may be distributed) data analysis engine that ties closely into corporate backend systems, such as ERP, sales and order processing, service management, etc. It’s only through a tightly integrated approach that the maximum benefits of EoT can be realized. Many current back-office vendors are attempting to make it easier for companies to accomplish this. Indeed, SAP is building a platform to integrate EoT data into its core ERP offerings with its Leonardo initiative.

Randy Shoup Discusses High Performing Teams

It is estimated that the intelligence produced by Bletchley Park, code-named "Ultra", ended the war two years early, and saved 14 million lives. ... Although the Bletchley Park work fell under the domain of the military, there was very little hierarchy, and the organisational style was open. The decryption was conducted using a pipeline approach, with separate "huts" (physical buildings on the campus) performing each stage of intercept, decryption, cataloguing and analysis, and dissemination. There was deep cross-functional collaboration within a hut, but extreme secrecy between each of them. There was a constant need for iteration and refinement of techniques to respond to newer Enigma machines and procedures, and even though the work was conducted under an environment of constant pressure the code-breakers were encouraged to take two-week research sabbaticals to improve methods and procedures. There was also a log book for anyone to propose improvements, and potential improvements were discussed every two weeks.

Intuit's CDO talks complex AI project to improve finances

The most obvious is a chat bot, but it could also provide augmented intelligence for our customer care representatives; it could provide augmented intelligence for our accountants who are working on Intuit's behalf or private accountants who are using Intuit software. It could be deployed in internal processes where product teams learn how people interact with our software through a set of focus groups. So, it's one technology that could be instantiated across many different platforms and touchpoints. That's one of the exciting aspects from a technology perspective. If you think about how a human works, there are so many things that are amazing about humans, but one is that they have the ability to rapidly change contexts and rapidly deal with a changing environment. The touchpoint doesn't matter. It doesn't matter if you're talking on video, on the phone or in person. Generally speaking, people can deal with these channels of communication very easily. But it's hard for technology to do that. Technology tends to be built for a specific channel and optimized for that channel.

Ethereum is Built for Software Developers

In part, this is all thanks to what Ethereum has accomplished in a very short period. We give too much credit to Bitcoin’s price, which skyrocketed to nearly $20,000 in December 2017, but the reality is in the code, and Ethereum is now what all dApp platforms compare themselves with, not the decade-old Bitcoin model. As Ethereum solves the scalability problem, it will effectively untether itself from Bitcoin’s speculative price volatility. If Bitcoin is a bet, Ethereum is a sure thing. The main reason is the developer community it has attracted and the wide range of startups that use it, especially in the early phases of their development. As TRON might find out, once they go independent they may have a more difficult time attracting software developers. Ethereum must move beyond piggybacking on the ICOs of 2017 and be the open-source public distributed world operating system it was designed to be. It has massive potential to fulfill, and in a crypto vacuum of hype and declining prices, Ethereum is perhaps the last chance before 2020, as the Chinese blockchains take over. The window is disappearing, guys.

Software Flaws: Why Is Patching So Hard?

"As OCR states, identifying all vulnerabilities in software is not an easy process, particularly for the end user or consumer," says Mac McMillan, CEO of security consultancy CynergisTek. Among the most difficult vulnerabilities to identify and patch in healthcare environments "are those associated with software or devices of a clinical nature being used directly with patients," he says. "There are many issues that make this a challenge, including operational factors like having to take the system offline or out of production long enough to address security. Hospitals don't typically stop operations because a patch comes out. The more difficult problems are ones associated with the vulnerability in the software code itself, where a patch will not work, but a rewrite is necessary. When that occurs, the consumer is usually at a disadvantage." Fricke says applications that a vendor has not bothered to keep current are the trickiest to patch. "Some vendors may require the use of outdated operating systems or web browsers because their software has not been updated to be compatible with newer versions of operating systems or web browsers," he says.

5 security strategies that can cripple an organization

Security teams today have a two-faceted information problem: siloed data and a lack of knowledge. The first issue stems from the fact that many companies are only protecting a small percentage of their applications and, therefore, have a siloed view of the attacks coming their way. Most organizations prioritize sensitive, highly critical applications at the cost of lower-tier apps, but hackers are increasingly targeting the latter and exploiting them for reconnaissance and often much more. It’s amazing how exposed many companies are via relatively innocuous tier 2 and legacy applications. The second, and more significant, issue can be summarized simply as, “you don’t know what you don’t know.” IT has visibility into straightforward metrics, but it often lacks insight into the sophistication of attempted breaches, how their risk compares to peers and the broader marketplace, and other trends and key details about incoming attack traffic. With visibility into only a small percentage of the attack surface, it’s very difficult to know whether the company is being targeted and exploited. Given the resource challenges noted above, it’s unrealistic to attempt to solve this problem with manpower alone.

What's the future of server virtualization?

Prior to server virtualization, enterprises dealt with server sprawl, with underutilized compute power, with soaring energy bills, with manual processes and with general inefficiency and inflexibility in their data-center environments. Server virtualization changed all that and has been widely adopted. In fact, it’s hard to find an enterprise today that isn’t already running most of its workloads in a VM environment. But, as we know, no technology is immune to being knocked off its perch by the next big thing. In the case of server virtualization, the next big thing is going small. Server virtualization took a physical device and sliced it up, allowing multiple operating systems and multiple full-blown applications to draw on the underlying compute power. In the next wave of computing, developers are slicing applications into smaller microservices which run in lightweight containers, and also experimenting with serverless computing (also known as function-as-a-service, or FaaS). In both of these scenarios, the VM is bypassed altogether and code runs on bare metal.
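The serverless model described above reduces an application's unit of deployment from a whole VM to a single stateless function handed to the platform. A minimal sketch in the common event/context handler style (the signature is a widespread convention across FaaS platforms, not tied to any specific provider):

```python
# A stateless function-as-a-service handler: no server to provision,
# no VM image to maintain. The event dict and optional context argument
# follow the common FaaS convention; treat this as a generic sketch.
def handler(event, context=None):
    """Respond to a single invocation and return immediately."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello, {name}"}
```

The platform, not the developer, decides where and on what substrate such a function runs, which is exactly the shift away from managing VMs that the article describes.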

Quote for the day:

"The simple things are also the most extraordinary things, and only the wise can see them." -- Paulo Coelho
