Daily Tech Digest - June 18, 2018

The right kind of AI: why intelligence must be augmented, not artificial

Einstein is a layer within the Salesforce platform that helps users make the best use of their data, delivering insight that allows them to truly focus on their customers. It does so by utilising the computing power of AI, a technology at the heart of everything Salesforce is trying to achieve. Like many observers, Salesforce believes that AI is set to be the dominant technology of the next decade and that understanding customers is best achieved through AI. It allows users to address a number of challenges, such as: learning from the data coming into the organisation; improving sales engagement; being proactive in customer service problem solving; and becoming more predictive in addressing issues before they become a real problem. Salesforce Ventures has announced a new $50 million fund to encourage startups to build AI-fuelled applications on top of Salesforce. This overall change of focus is reflected in the apps that are proving most popular within AppExchange. AI’s ability to automate certain tasks, augment any number of others, and bring enormous insight based on big data is behind this rise in AI-based apps.


Effective application security takes a team effort

When it comes to application security, the DevOps team has the hardest job of all. Actionable vulnerability data is rarely available during actual development cycles, meaning many security flaws only surface once an application has already gone live. Furthermore, due to time constraints imposed by senior leadership, DevOps teams are often confined to conducting security assessments at the last minute, just prior to release, which is far too late to be effective. DevOps teams need to work closely with security professionals and senior leadership to build security into the entire development lifecycle. Moving to a continuous integration process can help with this, as can the use of both dynamic scanning and source scanning throughout the development and implementation phases. It’s also the role of DevOps to demonstrate to senior leadership that a slightly longer development phase is far preferable to repeating the entire process multiple times because vulnerabilities were only discovered after release. However, this is only possible if both DevOps and security professionals can communicate effectively up the chain of command, without fear.


Myth-buster: the real costs of Public and Private Cloud

Private cloud infrastructure is generally perceived as costly due to the consultative element and ongoing management costs. Although public cloud seems the far more cost-effective option on the surface, there are some hidden costs attached. For example, there is a charge attached to moving data traffic between the various physical and virtual machines used by the public cloud. Public cloud providers also generally charge an additional 20% on top of the fees charged by the platform providers themselves. Another hidden cost of public cloud is the background management and maintenance services, which are of course necessary for any type of cloud infrastructure. Finally, the question of reversibility is key. When a firm migrates its infrastructure from one cloud to another or back to an internal architecture, the costs involved are often underestimated. Once all operational developments have been tailored specifically for a public cloud, migrating away from it can quickly become expensive, and that’s not even including the migration costs involved when transferring data to an external platform, which can also be high.
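As a rough illustration of how those add-ons stack up, the sketch below totals one month of hypothetical public cloud spend: the platform fees, the roughly 20% charge on top of them, and a data-transfer line item. Every figure is invented purely for illustration and is not taken from any provider's price list.

// Hypothetical figures only; not from any provider's price list.
using System;

class CloudCostSketch
{
    static void Main()
    {
        decimal platformFees   = 10_000m;              // monthly compute/storage fees billed by the platform
        decimal providerMarkup = platformFees * 0.20m; // the ~20% charged on top of platform fees
        decimal egressGb       = 5_000m;               // data moved between machines/out of the platform per month
        decimal egressPerGb    = 0.09m;                // illustrative per-GB transfer charge
        decimal egressFees     = egressGb * egressPerGb;

        decimal total = platformFees + providerMarkup + egressFees;
        Console.WriteLine($"Platform: {platformFees:C}, markup: {providerMarkup:C}, " +
                          $"transfer: {egressFees:C}, total: {total:C}");
    }
}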


RegTech: The future of market abuse surveillance

Despite the risks, using independent solutions for different forms of data is currently the norm. In fact, 70% of respondents to a PwC market abuse surveillance survey are using three or more software vendors to execute their surveillance requirements, and 75% are unable to review trade alerts alongside contemporaneous electronic communications or voice alerts. Further, alerts generated by multiple systems are typically reviewed manually by separate compliance teams that do not have easy access to each other’s information. Such dispersion prevents firms from having a true 360° view of employee behavior and limits their ability to stay ahead of emerging risks. Adding to the problem, data volumes and sources have also been increasing as the methods that traders use to communicate on a daily basis – from cell phones to chat apps to social media – continue to diversify. Communications surveillance also typically uses lexicon-based search techniques, which tend to produce high volumes of false positives and potentially miss truly suspicious behavior. Finally, there are challenges associated with high volumes of false positives, some stemming from legacy systems and scenarios, which may not be calibrated with the current business landscape and risks.
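To see why lexicon-based matching is so noisy, here is a minimal sketch of the technique: a flat keyword list applied to raw messages with no sense of context. The lexicon, the messages and the matching rule are all invented for illustration; real surveillance platforms are far more elaborate, but the false-positive pattern is the same.

// A minimal, invented sketch of lexicon-based communications surveillance.
using System;
using System.Collections.Generic;
using System.Linq;

class LexiconSurveillanceSketch
{
    static void Main()
    {
        var lexicon = new[] { "guarantee", "off the record", "move the price" };

        var messages = new List<string>
        {
            "I can guarantee delivery of the report by Friday.", // benign, but still flagged
            "Keep this off the record until the announcement.",  // potentially suspicious
            "Lunch at noon?"                                      // clean, no alert
        };

        foreach (var msg in messages)
        {
            // Case-insensitive substring match: no context, so innocent phrasing trips alerts.
            var hits = lexicon.Where(term =>
                msg.IndexOf(term, StringComparison.OrdinalIgnoreCase) >= 0).ToList();

            if (hits.Any())
                Console.WriteLine($"ALERT ({string.Join(", ", hits)}): {msg}");
        }
    }
}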


Turn on the lights! The key challenges for business transformation

Waste can simply be defined in terms of its opposite – value. While costs are inevitable for all businesses, waste is optional. When a company is creating value and nothing but value, there is no waste. However, no system is that efficient, and there will always be some waste that is uneconomical to remove. But a large percentage of organisational waste – anywhere from 50% to 70%, based on most studies – provides a healthy return on investment when removed, while contributing to competitiveness. And this is why we turn the lights on. While those lights are on, it is important that everyone can see the mess for what it is. It will almost certainly be a lot bigger than anyone had imagined, and leaders need to be prepared for that. They also need to be prepared to forgive, because if they don’t, the waste will simply go back underground. Keeping the lights on means focusing not on the people, and not on the waste, but rather on the causes of the waste. If time and resources are spent only on cleaning up the mess, things will get dirty again very soon. The endgame here is to understand and deal with the institutional practices and structures that are endemic to the creation of institutionalised waste.


The digital transformation and the importance of humans

Change always creates stress and uncertainty for us as human beings. In my day-to-day work at Siemens, I often notice, however, that many people are generally open to change as such. In fact, employees often want things to change. As a rule, the difficulties arise as soon as they have to try out new things and implement concrete changes themselves. Then I often hear statements like: “I don’t even know where to begin.” Or: “I have so much to do and no time for anything else.” And that’s exactly where the problem is: we have to understand that change isn’t “deferrable,” let alone a phase that ends at some point. We can’t cut ourselves off from new developments, nor can we reduce the speed at which changes occur. To keep pace, we’ve got to adapt and move faster – as people, as a company and as a society. We’ve got to be open to new things and leverage digitalization and its opportunities in such a way that they help us increase the quality of life and benefit companies as a whole. To accomplish this goal, we have to do some things differently than we have in the past. And this shift can’t happen without a culture change.


Cisco makes SD-WAN integration a top priority

“The branch is a very complicated part of the network when you think about it because when you think about a global business where you’ve got all the different types of interconnect you have around the world and you’re trying to manage all that. That part of the network is going to a software-defined WAN, and it’s an area we’ve been investing in heavily,” said David Goeckeler, executive vice president and general manager of networking and security at Cisco, in an interview with Network World. “We had an iWAN solution. We have an SD-WAN solution from Meraki, and then we purchased Viptela because they had innovated on the cloud side of it, and we wanted to marry that up with the [Integrated Services Router] franchise that we had in iWAN, and we are well down the path of that integration. And I think we’re seeing big projects move forward now in the SD-WAN space. It’s a market that had been kind of stalled because I think customers were trying to figure out what to do,” he said. Other Cisco executives reiterated the importance of getting Viptela further integrated into the company’s networking portfolio. “One of the important parts of what Viptela brings is an easy way to implement really strong end-to-end segmentation that lets users build and secure different segments of their networks,” said Scott Harrell.


Does Cyber Insurance Make Us More (Or Less) Secure?

Cyber insurance policies can be complex and tricky to understand, and anxious C-suite executives often buy cyber insurance without understanding the full extent of what policies cover and what they don't. To grow the market and diversify the risk, insurance companies are taking on all comers, often with no adequate measure of the true risk any given insured enterprise faces. Both insurance carriers and enterprise buyers of cyber insurance are groping their way forward in the dark, a potentially dangerous scenario. Most insurance carriers, however, are aware of this blind spot and are researching how to better measure and quantify cyber risk. Measuring cyber risk is very different from measuring risk in other domains. If you want to rate the risk of an earthquake or a hurricane, the actuarial science is sound: a data center in a hundred-year flood plain can expect a catastrophic flood once in a hundred years. Cyber risk, on the other hand, remains far harder to quantify — a problem, it must be noted, the insurance business is working hard to solve.


How to know when data is 'right' for its purpose

There are certainly scenarios where IT can answer the “right data” question with a confident yes or no and only the most minor qualification. That is the case with metrics and calculations, because there is always a right answer when math is involved. The qualification would be that IT has the correct definition and, of course, that the underlying data has been populated consistently. Another way of working through this challenge is to clarify the expectation of the business user. Asking a few more questions to ascertain the true need and the reason behind the question can help frame the answer tremendously. Is the question based on previous instances of “bad” data? Again, “bad” data is relative and is always judged from the perspective of the business user. If so, then framing the response to highlight improvements in the consistency and validation of the source data may reassure the user and meet their needs. Maybe the question is related to reference data that had not previously been governed or monitored. If so, walking through the steps taken to evaluate validity against a set of expected results will start building confidence in the final product.
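As a sketch of that last step, the snippet below checks incoming records against a governed set of expected reference values and reports anything that falls outside it. The country codes and records are hypothetical, chosen only to show the shape of such a validation walkthrough.

// An invented sketch of validating records against an expected reference set.
using System;
using System.Collections.Generic;
using System.Linq;

class ReferenceDataCheckSketch
{
    static void Main()
    {
        // The governed reference set the business has agreed on (hypothetical).
        var expectedCountryCodes = new HashSet<string> { "US", "GB", "DE", "FR" };

        var incomingRecords = new[]
        {
            (Id: 1, Country: "US"),
            (Id: 2, Country: "UK"), // outside the reference set ("GB" is the governed value)
            (Id: 3, Country: "DE"),
        };

        var violations = incomingRecords
            .Where(r => !expectedCountryCodes.Contains(r.Country))
            .ToList();

        Console.WriteLine(violations.Any()
            ? $"Failed validation: {string.Join(", ", violations.Select(v => $"record {v.Id} ({v.Country})"))}"
            : "All records match the expected reference values.");
    }
}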


Default Interface Methods in C# 8


The main benefit that default methods bring is that it is now possible to add a new method with a default implementation to an existing interface without breaking the classes that implement that interface. In other words, this feature makes it optional for implementers to override the method. An excellent scenario for this feature is the logging example described below: the ILogger interface has one abstract WriteLogCore method, and all of the other methods, like WriteError and WriteInformation, are default methods that call WriteLogCore with a different configuration. The ILogger implementer has to implement only the WriteLogCore method. Think of how many lines of code that saves in each class that implements the logger. While this feature can be a great thing, there are some dangers, as it is a form of multiple inheritance and hence can suffer from the diamond problem. Also, the interface methods must be "pure behavior" without state; interfaces still cannot, as in the past, directly declare a field.
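Here is a sketch of that ILogger example using C# 8 default interface members. The summary above does not give exact signatures, so the LogLevel enum and parameter shapes are illustrative rather than taken from the source article.

// Illustrative sketch of default interface members (C# 8); signatures are assumed.
using System;

enum LogLevel { Information, Warning, Error }

interface ILogger
{
    // The only member an implementer is required to provide.
    void WriteLogCore(LogLevel level, string message);

    // Default members: existing implementations keep compiling without overriding these.
    void WriteInformation(string message) => WriteLogCore(LogLevel.Information, message);
    void WriteError(string message)       => WriteLogCore(LogLevel.Error, message);
}

class ConsoleLogger : ILogger
{
    public void WriteLogCore(LogLevel level, string message) =>
        Console.WriteLine($"[{level}] {message}");
}

class Program
{
    static void Main()
    {
        // Default members are reachable through the interface type, not the class type.
        ILogger logger = new ConsoleLogger();
        logger.WriteError("Something went wrong.");
        logger.WriteInformation("All good again.");
    }
}

Note that the default members are only visible through the interface reference; a ConsoleLogger-typed variable exposes just WriteLogCore, which is one of the design wrinkles to be aware of when relying on this feature.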



Quote for the day:


"If you're not failing once in a while, it probably means you're not stretching yourself." -- Lewis Pugh

