Daily Tech Digest - June 17, 2019

300+ Terrifying Cybercrime and Cybersecurity Statistics & Trends


With global cybercrime damages predicted to cost up to $6 trillion annually by 2021, not getting caught in the landslide is a matter of taking in the right information and acting on it quickly. We collected and organized over 100 up-to-date cybercrime statistics that highlight: the magnitude of cybercrime operations and impact; the attack tactics bad actors used most frequently in the past year; how user behavior is changing and how it… isn't; what cybersecurity professionals are doing to counteract these threats; how different countries fare in terms of fighting off blackhat hackers and other nation states; and what can be done to keep data and assets safe from scams and attacks. Dig into these surprising (and sometimes mind-boggling) internet security statistics to understand what's going on globally and discover how several countries fare in protecting themselves. The article includes a handy infographic you can browse to see how each stat is connected to the others, and plenty of visual representations of the most important facts and figures in information security today.


How Blockchain And AI Can Help Master Data Management

uncaptioned
Ensuring data security is vital, not only for ethical purposes but also for compliance with regulatory bodies. And no conversation about security and privacy, in this day and age, is complete without mention of blockchain. Blockchain, often associated with privacy, can be used to secure the sensitive information that makes up master data. This includes any personal information, such as that pertaining to customers and employees. It can also refer to accounting and banking-related information that may be necessary for processes like procurement and sales. All such information can be secured using blockchain through cryptographic hashing. Businesses can internally build enterprise blockchain networks to secure and manage master data using a decentralized model. This not only secures the information from illicit modification, but also protects it from accidental loss due to physical damage to centralized servers. Additionally, it helps demonstrate compliance with privacy regulations: data on a blockchain, in addition to being immutable, is transparent and visible to all participants, ensuring smoother audits and checks.
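The tamper-evidence described here comes from chaining record hashes. Below is a minimal, in-memory Python sketch of the idea (the class and field names are illustrative, not any product's API): each record is hashed together with the previous hash, so changing any stored record invalidates every subsequent check.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a master-data record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

class MasterDataChain:
    """Append-only, hash-linked list of master-data records (in-memory toy)."""

    def __init__(self):
        self.blocks = []  # list of (record, hash) pairs

    def append(self, record: dict) -> None:
        prev = self.blocks[-1][1] if self.blocks else "0" * 64
        self.blocks.append((record, record_hash(record, prev)))

    def verify(self) -> bool:
        """Recompute every hash; any edited record breaks the chain."""
        prev = "0" * 64
        for record, h in self.blocks:
            if record_hash(record, prev) != h:
                return False
            prev = h
        return True

chain = MasterDataChain()
chain.append({"customer_id": 1, "email": "a@example.com"})
chain.append({"customer_id": 2, "email": "b@example.com"})
assert chain.verify()

# An illicit modification is detectable: the stored hash no longer matches.
chain.blocks[0] = ({"customer_id": 1, "email": "evil@example.com"}, chain.blocks[0][1])
assert not chain.verify()
```

A real enterprise network would distribute these blocks across participants and add consensus; the hash chaining alone is what makes modification evident.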


Developing a Functional Data Governance Framework


Harvard Business Review reports 92 percent of executives say their Big Data and AI investments are accelerating, and 88 percent talk about a greater urgency to invest in Big Data and AI. For AI and machine learning to be successful, Data Governance must also be a success. Yet Data Governance remains elusive to the 87 percent of businesses that, according to Gartner, have lower levels of Business Intelligence maturity. Recent news has also suggested a need to improve Data Governance processes. Data breaches continue to affect customers, and the impacts are quite broad, as an organization's customers (including banks, universities, and pharmaceutical companies) must continually take stock and change their user names and passwords. Effective Data Governance is a fundamental component of data security processes. Data Governance has to drive improvements in business outcomes. "Implementing Data Governance poorly, with little connection or impact on business operations, will just waste resources," says Anthony Algmin, Principal at Algmin Data Leadership. To mature, Data Governance needs to be business-led and a continuous process, as Donna Burbank and Nigel Turner emphasize.


Survey: Data-center staffing shortage remains challenging

help wanted data center network room it shortage now hiring by yinyang getty
Contributing to the staffing crisis is a lack of workplace diversity. In particular, the Uptime Institute’s research highlights a significant gender imbalance: 25 percent of managers surveyed have no women among their design, build or operations staff, and another 54 percent have 10 percent or fewer women on staff. Only 5 percent of respondents said women represent 50 percent or more of staff. Yet most respondents don’t seem to think there’s anything deterring women from working where they work. A majority (85 percent) said it’s easy for women to pursue a career in their respective organization’s data center team or department; just 15 percent said it’s difficult. Referring to the data-center industry as a whole, respondents were less confident about women’s employment prospects: 53 percent said it’s easy for women to pursue a career in data centers, and 47 percent said it’s difficult. In the big picture, diversity issues could become a threat to business operations. “Study after study shows that a lack of diversity is not just a pipeline issue,” Ascierto said. 


How banks can use ecosystems to win in the SME market

How banks can use ecosystems to win in the SME market
In parallel to designing the prototype, banks need to think through IT implications at the outset. The design choices will significantly affect the speed of development and the potential reach of the new solution. A design based on integration with an existing banking app might command a larger audience than a new stand-alone application—yet the latter typically offers more flexibility. The choice of a platform should be wedded to the monetization approach (see the “Think early about monetization” section). If the bank wants to retain the option of spinning off an ecosystem platform in the future, or listing it separately, its IT should not be enmeshed with the bank’s legacy systems. Nor can it be completely divorced: efficient transfer of information between the two systems is needed to maximize value for both banking and nonbanking offerings. IT is a key driver of costs and of the ecosystem design and business model. For instance, a Western European bank decided to integrate its ecosystem solution with its mobile banking platform.


The New Addition to the Dell EMC Ready Solutions for AI Portfolio


The Deep Learning with Intel solution joins the growing portfolio of Dell EMC Ready Solutions for AI and was unveiled today at the International Supercomputing Conference (ISC) in Frankfurt. This integrated hardware and software solution is powered by Dell EMC PowerEdge servers, Dell EMC PowerSwitch networking, and scale-out Isilon NAS storage. It leverages the newest AI capabilities of the 2nd Generation Intel® Xeon® Scalable processor microarchitecture and the Nauta open source software, and includes enterprise support. The solution empowers organizations to deliver on the combined needs of their data science and IT teams and leverages deep learning to fuel their competitiveness. Dell Technologies Consulting Services help customers implement and operationalize Ready Solution technologies and AI libraries, and scale their data engineering and data science capabilities. Once deployed, ProSupport experts provide comprehensive hardware and collaborative software support to help ensure optimal system performance and minimize downtime.


5G in the UK — overhyped or has the next era of connectivity really begun?

5G in the UK — overhyped or has the next era of connectivity really begun? image
The availability of 5G depends on local operators (EE, O2, Vodafone, etc.); businesses are relying on them to build it out and drive the capabilities forward. These businesses will need connectivity across multiple networks, so while an operator race is developing, it's important that every network is competitive. Unfortunately, this is not a priority for heated competitors. Over the last two months, operators have revealed their capabilities, but they're very focused on their own networks. It's unlikely they have even started to talk about how to make that available to other partners, or how to support sharing across partners within the network; as with other technology changes, that typically comes in a second phase. "Typically, the first phase is to build out and scale up within their own networks; and then the next phase is asking how do you do interoperability and interworking between the networks," said Sherwood. "That's even further away from scale. ... "


Could AI Enable The Idea Of 'Reverse Fact Checking'?

Getty Images.
Fact checking today is a reactive process in which journalists wait for a falsehood to begin spreading virally and then publish their final verdict long after the falsehood's spread has tapered off and the damage has been done. Much of this delay stems from the amount of time and research it takes for fact checkers to investigate a claim and determine its veracity. What if we inverted this process and required every social media post to provide external attribution for its claims and used deep learning algorithms to compare the statements in the post to the original material it cites as its source? Could this "reverse fact checking" largely curb the spread of digital falsehoods? The greatest limitation of today's fact checking landscape is the time and effort it takes fact checkers to investigate a claim. Collecting evidence, reaching out to organizations and experts for commentary and summarizing the resulting information into a final verdict is an extremely time-consuming process that offers few opportunities for efficient scaling.
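As a toy illustration of the matching step, the sketch below stands in for the deep learning comparison with a simple string-similarity check from Python's standard library. The threshold and naive sentence splitting are arbitrary assumptions; a real system would use a learned semantic model rather than surface similarity.

```python
from difflib import SequenceMatcher

def supported_by_source(claim: str, source_text: str, threshold: float = 0.6) -> bool:
    """Crude stand-in for a learned model: does any sentence in the cited
    source closely match the claim made in the post?"""
    sentences = [s.strip() for s in source_text.split(".") if s.strip()]
    best = max(
        (SequenceMatcher(None, claim.lower(), s.lower()).ratio() for s in sentences),
        default=0.0,
    )
    return best >= threshold

source = ("The city council approved the budget on Monday. "
          "Spending on parks will rise by two percent.")

# A claim actually present in the source passes; an unsupported claim fails.
assert supported_by_source("Spending on parks will rise by two percent", source)
assert not supported_by_source("The mayor resigned on Monday", source)
```

The point of the sketch is the inversion of workflow: the post is checked against the material it cites at publication time, instead of waiting for human fact checkers to react.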


Identity and access management –– mitigating password-related cyber security risks

Identity and access management –– mitigating password-related cyber security risks image
The death of the password has been heralded since Hewlett-Packard introduced biometric fingerprint scanning into laptops in the mid-1990s. But it is still pervasive. Biometrics have become more common in personal and mobile devices, it's true. But there is still a range of applications out there that are hugely dependent on passwords as their primary method of authentication. In any enterprise or small business, there's still a heavy reliance on passwords, and often businesses don't even know the extent to which applications are being used in the business. IT might know about the common apps used in the organisation, but they may have no visibility of applications that some departments have adopted autonomously. To give an example, My1Login worked with one smaller organisation that thought it had about 40 applications in use across the business. When My1Login's solution was switched on, the technology discovered there were actually 600 corporate applications in use. All of these are now fully integrated with single sign-on.


Five Android and iOS UI Design Guidelines for React Native


In a multiplatform approach, the designer is bound by the guidelines for each platform. This approach is more useful if your application has a complex UI and your main goal is to attract users who are more likely to spend their time on their favorite platform, be it iOS or Android. Going by the above example of a search bar, an Android user is more likely to be comfortable with the look and feel of the standard search bar of an Android app. This contrasts with an iPhone user, who will not be comfortable with the standard Android search bar. So, in a multiplatform approach, you strive to give each user the kind of look and feel they are used to. Let's look at a more realistic example to get a clearer picture of what the multiplatform approach entails: Airbnb. As you can see in the image below, the iOS and Android versions of the Airbnb app look entirely different, because they follow totally platform-specific design guidelines.



Quote for the day:



"Blessed are the people whose leaders can look destiny in the eye without flinching but also without attempting to play God." -- Henry Kissinger


Daily Tech Digest - June 16, 2019

While We Wait For Artificial Superintelligence, Let's Make The Most Of Augmented Intelligence

uncaptioned
Augmented intelligence has displayed unmatched potential in multiple industry sectors such as healthcare, retail, finance, manufacturing and many more. Just about every organization is already deploying or planning to use augmented intelligence for various applications. The ability of augmented intelligence to improve human capabilities has proved to be fruitful in the workplace. With the help of augmented intelligence, employee performance, productivity, and experience can grow at a staggering rate. Organizations must exploit augmented intelligence to its maximum potential to gain the best possible results and maintain a competitive edge. Augmented intelligence can effectively improve workplace productivity by automating various tasks. Routine and admin tasks require a workforce and consume a significant chunk of employee time. Such tasks can be easily automated with the help of augmented intelligence. Augmented intelligence has given rise to advanced solutions such as Robotic Process Automation (RPA) for various industry sectors.


Data Governance: From Risk Management to Business Value

When data governance was just oriented around compliance, the scope of data and the governance requirements were controlled and prescriptive. This narrow focus made it possible to use manual processes for governance and stewardship activities. In the new world of business value-based data governance, the sheer scale of data and the collaboration required across all organizational functions make automation critical to success. We now have data lakes with petabytes of data, being updated in real time with streaming sensor data, social data, and mobile location data. There are tens of thousands of users accessing the data across finance, sales, marketing, service, procurement, research and development, manufacturing, logistics, and distribution. It's at least a thousand-fold increase in scale and complexity. At this scale the only way you will keep up is with AI-powered automation.


The Danger of Bias in an AI Tech-Based Society


The intelligence of AI systems is learned from humans. By nature, humans are biased. We will usually want our national team to win against a rival. We will always be rooting for our own family members to succeed. Even though we may not realize it, deep in our subconscious lies bias.  Algorithmic bias occurs when the AI system acts in a way that reflects the implicit values of the humans who were involved in the data collection, selection, and programming. Despite the presumed neutrality of the data, algorithms are open to bias. Algorithmic bias often goes undetected. Bias is hidden in the depths of the mathematical programming of AI tech and means that important decisions go unchecked. This could have serious negative consequences for poorer communities and minority groups. ... If algorithms could accurately predict which defendants are likely to re-offend, the system could be made more selective about sentencing and more just. However, this would mean that the algorithms would have to be devoid of any type of bias to avoid exacerbating unwarranted and unjust disparities that are already far too common in the criminal justice system.



Addressing the Top Three Risk Trends for Banks in 2019

Most of the cybersecurity risk for banks comes from application security. The more banks rely on technology, the greater the chance they face of a security breach. Adding to this, hackers continue to refine their techniques and skills, so banks need to continually update and improve their cybersecurity skills. This expectation falls to the bank board, but the way boards oversee cybersecurity continues to vary: Twenty-seven percent opt for a risk committee; 25 percent, a technology committee; and 19 percent, the audit committee. Only 8 percent of respondents reported their board has a board-level cybersecurity committee; 20 percent address cybersecurity as a full board rather than delegating it to a committee. Utilizing technological tools to meet compliance standards—known as regtech—was another prevalent theme in this year's survey. This is a big stress area for banks due to continually changing requirements. The previous report indicated that survey respondents saw increased expenses around regtech.


Capturing value in machinery and industrial automation as market dynamics change

Capturing value in machinery and industrial automation as market dynamics change
Currently, most established players—OEMs, automation-device suppliers, and machine-control suppliers—are working on strategies to cope with shifting growth patterns and the resulting mix of unexpected high demand and declining growth in more mature technologies. At the same time, these players are preparing themselves to be best positioned to claim a share of the additional value expected to be created by digital manufacturing solutions, which we estimate will double to €32 billion worldwide by 2025. The disruptive trend of digitization also attracts new players to participate in the market, especially in the space of software, platforms, and application providers. This diversification challenges the foothold that established players have enjoyed on strategic control points, for example, the machine-control layer in the automation technology stack. While the strategic cornerstones are often obvious and similar across players—for instance, securing core business, capturing additional value from digitization, and increasing internal efficiency—the exact chances of success of individual strategic measures and the threat from competition remain uncertain.



Microsoft’s Ann Johnson: ‘Identity is the new perimeter’

Identity is the new perimeter and we identify identity as the human, the device, the data, the application – and all of these have a unique identity and all of these need to be updated, hashed and healthy. In the context of ML, we take all of those variables and put them in the ML engine and assign risk based on where the user is, what they are trying to access, how they authenticate and what device they are on. What we find with bad actors is that we are not yet seeing, in any meaningful way, the production of malware that adapts in the wild as you would expect, though we may in the future. We are not yet seeing any meaningful corruption of AI models, or malicious data being fed into ML engines to try to train them incorrectly. I do expect that there will be attack vectors, and we are doing a tremendous amount of work with Microsoft Research to make sure we build those defences. But the good news is that we are not seeing it in any meaningful or wholesale way today, and that's why I don't think it is quite a race.
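The risk-assignment idea described here can be caricatured as a weighted combination of sign-in signals. In the Python sketch below, the signal names, weights, and thresholds are all invented for illustration; a production engine would learn them from telemetry rather than hard-code them.

```python
# Hypothetical signal weights; a real engine would learn these from telemetry.
RISK_WEIGHTS = {
    "unfamiliar_location": 0.4,
    "sensitive_resource": 0.3,
    "password_only_auth": 0.2,
    "unmanaged_device": 0.1,
}

def sign_in_risk(signals: dict) -> float:
    """Sum the weights of the signals present; the result lies in [0, 1]."""
    return sum(w for name, w in RISK_WEIGHTS.items() if signals.get(name))

def decide(signals: dict) -> str:
    """Map the risk score to an access decision (thresholds are illustrative)."""
    score = sign_in_risk(signals)
    if score >= 0.7:
        return "block"
    if score >= 0.3:
        return "require_mfa"
    return "allow"

assert decide({}) == "allow"
assert decide({"unfamiliar_location": True, "password_only_auth": True}) == "require_mfa"
assert decide({"unfamiliar_location": True, "sensitive_resource": True,
               "password_only_auth": True}) == "block"
```

The step-up-to-MFA middle band mirrors the conditional-access pattern Johnson describes: risk informs the authentication demand rather than a flat allow/deny.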


Managing the multicloud will require lots of AI – but people too

Cloud Mobile Phone Typing Smartphone Phone Finger
The more complex your multicloud becomes, the less likely it is that you'll be able to entirely automate responses to the vast range of underlying platform, application, service and other issues. Human-in-the-loop exception handling will become the order of the day for the long tail of rare cloud-computing use cases up and down this multilayered management plane. The more complex cloud management functions — including cost management, security and compliance, application development, deployment and operational management — will continue to rely on collaborative responses that skilled human IT personnel may need to improvise on the fly. The orchestration layer in the more complex cloud deployment use cases will need to drive human-response flows alongside entirely system-automated responses. The less common a specific incident or situation is, the less likely it is that there will be sufficient historical "ground truth" data for training the highly predictive statistical models upon which AI-driven automations depend. In many multicloud operational circumstances, AI-driven workflows will often span several tiers of IT support resources working in lockstep over indefinite periods.


Microsoft Edge Reddit AMA: Edge might come to Linux

Microsoft Edge
The biggest tease the company dropped was its apparent willingness to release an Edge version for Linux -- a move that was once considered inconceivable. "We don't have any technical blockers to keep us from creating Linux binaries, and it's definitely something we'd like to do down the road," the Edge team said. "That being said, there is still work to make them 'customer ready' (installer, updaters, user sync, bug fixes, etc.) and something we are proud to give to you, so we aren't quite ready to commit to the work just yet." "Right now, we are super focused on bringing stable versions of Edge first to other versions of Windows, and then releasing our Beta channels," Edge devs said. While the Chromium codebase on which the upcoming Edge version is based supports Linux builds, users were afraid that when Microsoft ripped out various Chromium features last year, it might have impacted Edge's ability to support cross-platform builds. However, today's comment confirms a tweet published in April on the personal Twitter account of one of Edge's developers.


For better healthcare claims management, think “digital first”

For better healthcare claims management, think “digital first”
In the long-term vision, digital solutions would cover all steps within claims management. Because the process would be fully digital, very little human intervention would be needed. In this scenario, claims would be transferred in real time from a provider to a cloud solution containing all electronic health documents. Once a claim is transferred to the cloud, self-learning algorithms would automatically access it and perform real-time auditing using technical reference points, such as the claimant’s insurance status and benefits package, as well as medical reference points. Once robust self-learning algorithms have been established and trained using both existing data and expert knowledge, their efficiency will continue to improve over time. Ultimately, it would become possible to automate payers’ communications with providers and customers. For example, if further information was required to reach a decision about a specific claim, providers would be contacted automatically via a digital request form that would include an integrated first check for basic information.
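A rule-based first pass at the auto-audit this vision describes might look like the Python sketch below. The field names, rules, and limits are hypothetical; in the article's scenario, self-learning algorithms would sit on top of, and gradually supplant, such hand-written checks.

```python
def audit_claim(claim: dict, member: dict) -> list:
    """Check a claim against technical reference points.
    Returns a list of issues; an empty list means the claim can auto-pay."""
    issues = []
    if not member.get("policy_active"):
        issues.append("policy inactive")
    if claim["procedure_code"] not in member.get("covered_procedures", set()):
        issues.append("procedure not in benefits package")
    if claim["amount"] > member.get("per_claim_limit", 0):
        issues.append("amount exceeds per-claim limit")
    return issues

member = {
    "policy_active": True,
    "covered_procedures": {"X100", "X200"},
    "per_claim_limit": 500.0,
}

# Clean claim passes untouched; a problematic one collects reasons for review.
assert audit_claim({"procedure_code": "X100", "amount": 120.0}, member) == []
assert audit_claim({"procedure_code": "Z999", "amount": 900.0}, member) == [
    "procedure not in benefits package",
    "amount exceeds per-claim limit",
]
```

The returned issue list is also what would drive the automated digital request form mentioned above: each issue maps to a specific piece of missing or conflicting information to request from the provider.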



Foundations Of Business Architecture


The work of creating and defining a business architecture is not meant as an academic exercise. A business architecture is based on the organization's business strategy. The business architecture positions the organization to operate efficiently in pursuit of its goals. As defined, a business venture is about creating value. Value is demonstrated in the form of corporate profits or in returns to owners and shareholders. Corporate goals tend to be high-level and wide. Organizations use various processes and methods for capturing and documenting the corporate goals. The method used in capturing the corporate goals is less important than having the discipline, structure, and communication methods to support the creation and dissemination of the corporate goals across the entire organization. Used most effectively, corporate goals are developed within the context of a larger enterprise-wide strategic planning function. Often, the process is used in creating the organization's data strategy, which may occur during enterprise architecture planning.



Quote for the day:


"The most valuable thing you can make is a mistake - you can't learn anything from being perfect." -- Adam Osborne


Daily Tech Digest - June 15, 2019

This is likely the No. 1 thing affecting your job performance


To improve your expertise, you must first identify gaps in your knowledge. You aren’t likely to be motivated to learn new things–nor can you be strategic about learning–if you’re not aware of what you do and don’t know. Without a good map of the existing state of your knowledge, you’ll bump into crucial new knowledge only by chance. ... The ability to know what you know and what you don’t know is called metacognition—that is, the process of thinking about your thinking. Your cognitive brain has a sophisticated ability to assess what you do and don’t know. You use several sources of information to make this judgment. Research by Roddy Roediger and Kathleen McDermott identified two significant sources of your judgments about whether you know something: memory and familiarity. If I ask you whether you’ve heard of Stephen Hawking, you start by trying to pull information about him from your memory. If you recall explicitly that he was a famous physicist or that he worked on black holes and had ALS, then you can judge that you’ve heard of him.



Fintech CEOs bullish on blockchain tech, give thumbs down on applications


While cryptocurrency received something of a reprieve, financial services executives this week expressed doubts about the current applications for blockchain and other distributed ledger technology. “There’s too much hype around blockchain,” said Rishi Khosla, CEO and co-founder of U.K.-based challenger bank OakNorth. “For the practicality of what’s actually been delivered so far, it is way underrated. I do believe that blockchain has a place in lending, especially when you think about sort of the whole ‘perfecting security process’. It just requires so much changing of the plumbing.” Still, some nodded favorably toward the technology’s potential impact on the industry. Securities and Exchange Commission commissioner Robert Jackson said blockchain technology can both shorten the time and lower the expense of clearing and settling trades. He also pointed to potential use cases for auditing, smart contracts and tracking and dealing with fraud.


Blockchain: A Boon for Cyber Security


Blockchain technology has impacted the cyber security industry in a few ways. The HYPR Corp is a New York based company that provides enterprises with decentralised authentication solutions, which enable consumers and employees to securely and seamlessly access mobile, Web and Internet of Things (IoT) applications. It uses blockchain technology to decentralise credentials and biometric data to facilitate risk based authentication. It invested US$ 10 million in 2018 on this platform. NuCypher is another blockchain security company, which applies proxy re-encryption to distributed blockchain systems. It also has an access control platform and uses public-key encryption to securely transfer data and enforce access requirements. Blockchain is one of the biggest tech buzzwords of the last few years, and the technology is being marketed as a cure for everything including cyber security. Japan's Ministry of Internal Affairs and Communications implemented a blockchain based system for processing government tenders in March 2018.



Sensory Overload: Filtering Out Cybersecurity's Noise

A good security process is extremely valuable. Regardless of the task at hand, process brings order to the chaos and minimizes the redundancy, inefficiency, and human error resulting from lack of process. On the other hand, a bad security process can have exactly the opposite effect. Processes should help and improve the security function. In order to do so, they need to be precise, accurate, and efficient. If they aren't, they should be improved by filtering out the noise and boiling them down to their essence. It's far too easy to get distracted by every new security fad that comes our way. Once in a while, an item du jour becomes something that needs to be on our radar. But most of the time, fads come and go and seldom improve our security posture. Worse, they can pull us away from the important activities that do. Many of us don't know exactly what logs and event data we will or will not need when crunch time comes. As a result, we collect everything we can get our hands on. We fill up our available storage, shortening retention and impeding performance, although we may never need 80% of what we're collecting.
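One practical antidote to the collect-everything habit is an explicit allow-list of event types that a detection or response process actually consumes. The Python sketch below illustrates the idea; the event types and record shapes are invented for illustration.

```python
# Illustrative allow-list: event types some detection playbook actually reads.
NEEDED_EVENTS = {"auth_failure", "privilege_escalation", "malware_detected"}

def filter_events(events: list) -> list:
    """Drop events that no detection or response process will ever read,
    instead of filling storage with them."""
    return [e for e in events if e["type"] in NEEDED_EVENTS]

events = [
    {"type": "auth_failure", "host": "web01"},
    {"type": "heartbeat", "host": "web01"},      # routine noise
    {"type": "malware_detected", "host": "db02"},
]

kept = filter_events(events)
assert [e["type"] for e in kept] == ["auth_failure", "malware_detected"]
```

The hard part, as the excerpt notes, is deciding the allow-list before crunch time; the filter itself is trivial once the security process has identified what it genuinely needs.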


The Next Big Privacy Hurdle? Teaching AI To Forget


The lack of debate on what data collection and analysis will mean for kids coming of age in an AI-driven world leaves us to imagine its implications for the future. Mistakes, accidents, teachable moments—this is how children learn in the physical world. But in the digital world, when every click, view, interaction, engagement, and purchase is recorded, collected, shared, and analyzed through the AI behemoth, can algorithms recognize a mistake and understand remorse? Or will bad behavior be compounded by algorithms that are nudging our every action and decision for their own purposes? What makes this even more serious is that the massive amount of data we’re feeding these algorithms has enabled them to make decisions experientially or intuitively like humans. This is a huge break from the past, in which computers would simply execute human-written instructions. Now, advanced AI systems can analyze the data they’ve internalized in order to arrive at a solution that humans may not even be able to understand—meaning that many AI systems have become “black boxes,” even to the developers who built them, and it may be impossible to reason about how an algorithm made or came to a certain decision.


How To Choose The Right Approach To Change Management

Our analysis shows that when you aggregate all the stages in the most popular OCM change models into a 10-stage process, none of them really cover all the bases. In fact, the analysis shows that if you choose one of these models you are likely to miss around 40 per cent of the steps suggested by other models. The analysis also shows that the biggest gap in popular change models is in 'Assessing the Opportunity or Problem Motivating the Change' – arguably the most critical step in OCM. ... So where do we turn when there is no real evidence to support popular change management models? Lewin did build an evidence base on a different approach to OCM. Rather than a planned approach to change, Lewin argues for a more emergent approach. He suggests that groups or organisations are in a continual process of adaptation – there is no freezing or unfreezing. So, what are the critical success factors for creating an organisational culture that can purposefully adapt to changing environments whilst maintaining current operations?



In the drive to improve customer experience, Marketing needs to develop this single customer view, which will allow extremely targeted marketing. It does not help if copious social and historic shopping data is collated and used to build a customer persona if the customer's mobile number or email address was captured incorrectly. Likewise, duplicate records and "decayed" (out of date) data create annoyances both to the customer and to the marketing department. Much research has gone into why data is inaccurate, and the same answer is always found: it is due to human error. While human error can create the initial quality issue, for instance, when customer information is being loaded by one of the company's employees, benign neglect is also a contributor. Periodic reviews of whether customer contact details have changed are required, as well as scrupulous attention to returned emails and failed SMS messaging experienced during a marketing campaign. It is interesting to note that "Inadequate senior management support" is given as a challenge by 21% of the respondents.


How Do We Think About Transactions in (Cloud) Messaging Systems?

The baseline that we need to come from, is that everything's interconnected with everything else and users are going to expect to connect with their data and to collaborate with other users on any set of data in real time across the globe.  Messaging systems were introduced as a way of providing some element of reliable message passing over longer distances. Consider the scenario where you're transferring money from one account to another. There isn't the possibility, nor is there the desire, for any bank to lock records inside the databases of any other bank around the planet. So messaging was introduced as a temporary place that's not in your database or in my database. And then we can move the money around through these high highly reliable pipes. And each step of the journey can be a transaction: from my database to an outgoing queue, and from my outgoing queue to an intermediary queue, from one intermediary queue to another intermediary queue, from there to your incoming queue, and from your incoming queue to your database. As long each one of those steps was reliable and transactional, the whole process could be guaranteed to be safe from a business perspective.
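The hop-by-hop journey described above can be sketched with in-memory queues. This Python toy is single-threaded, so each move is trivially atomic; a real broker would use transactions or acknowledgements to get the same guarantee under failure. The queue names and message shape are illustrative.

```python
from collections import deque

def transactional_move(source: deque, dest: deque) -> bool:
    """One hop of the journey: remove a message from source and append it
    to dest as a single step. In this single-threaded toy the step cannot
    be interrupted; a real system would wrap it in a broker transaction."""
    if not source:
        return False
    msg = source[0]      # peek; don't remove until the write has happened
    dest.append(msg)     # write to the next hop...
    source.popleft()     # ...then complete the read, finishing the step
    return True

# Money leaves my database via an outgoing queue, crosses an intermediary,
# and lands in your incoming queue; each hop is its own reliable step.
outgoing, intermediary, incoming = deque(), deque(), deque()
outgoing.append({"transfer": 100, "to": "acct-42"})

transactional_move(outgoing, intermediary)
transactional_move(intermediary, incoming)

assert incoming[0]["transfer"] == 100
assert not outgoing and not intermediary
```

Because no hop ever holds a lock on another party's database, and each hop either completes or leaves the message where it was, the end-to-end transfer is safe from a business perspective without a global transaction.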


Identity Is Not The New Cybersecurity Perimeter -- It's The Very Core

It suggests that security perimeters are still effective in a cloud-native world -- and they most certainly are not. I often like to say, “If identity is the new perimeter, then Bob in accounting is the new Port 80.” In this new cloud-first world, all a hacker needs to do is get one person in an organization to click a link and it's game over. With the compromised employee’s credentials in hand, they can walk right through your defenses undetected and rob you blind. For true security in the cloud, identity needs to move to the very core of a company’s cybersecurity apparatus. That’s because when there is no more perimeter, only identity can serve as the primary control for security. As advocates of zero trust security (myself included) advise, “Don’t trust, verify.” How do you do it? Making the transition to a security model that places identity at the center involves a cultural shift that spans a company’s people, processes and technology. Here are key insights on how to get started, based on 15 years of experience helping companies turn the corner on identity-based security.



Developing and Managing Change Strategies with Enterprise Architecture

The reality of most enterprises with IT portfolios consisting of > 100 IT applications is that a combination of each replacement option is technically feasible and, given the right approach, perhaps even cost-effective. And by using LeanIX, Enterprise Architects and their stakeholders can leverage collaborative mechanisms and live data to quickly evaluate technologies to see which mixture of SQL Server 2008/2008 R2 alternatives match specific business strategies and then govern the transformation projects thereafter. By linking Business Capabilities to applications, and linking those applications to technology components like SQL Server, Enterprise Architects can review Business Capability maps as seen within LeanIX Reports like the Application Matrix to align improvements with essential organizational processes. In particular, alongside a series of configurable views like “Technology Risk” and “Lifecycle”, an Application Matrix Report shows Business Capabilities and their supporting technologies across geographical user groups to help Enterprise Architects base decisions on overlapping business needs.



Quote for the day:


"A leadership disposition guides you to take the path of most resistance and turn it into the path of least resistance." -- Dov Seidman


Daily Tech Digest - June 14, 2019

How a service mesh helps manage distributed microservices

What makes a service mesh unique is that it is built to accommodate the unique nature of distributed microservice environments. In a large-scale application built from microservices, there might be multiple instances of any given service, running across various local or cloud servers. All of these moving parts obviously make it difficult for individual microservices to find the other services they need to communicate with. A service mesh automatically takes care of discovering and connecting services on a moment-to-moment basis so that both human developers and individual microservices don’t have to. Think of a service mesh as the equivalent of software-defined networking (SDN) for Layer 7 of the OSI networking model. Just as SDN creates an abstraction layer so network admins don’t have to deal with physical network connections, a service mesh decouples the underlying infrastructure of the application from the abstract architecture that you interact with. The idea of a service mesh arose organically as developers began grappling with the problems of truly enormous distributed architectures. Linkerd, the first project in this area, was born as an offshoot of an internal project at Twitter.
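The discovery problem above can be made concrete with a toy registry. This is a deliberately simplified sketch of what a mesh's control plane automates (real meshes like Linkerd also handle health checks, mTLS, retries and telemetry): instances register themselves, and callers resolve a service name to a live address instead of hard-coding one.

```python
import random

class ServiceRegistry:
    """Toy service registry: the discovery piece a service mesh automates.

    Instances register under a service name; callers resolve the name to
    one live instance without knowing any addresses in advance.
    """
    def __init__(self):
        self._instances: dict[str, list[str]] = {}

    def register(self, service: str, address: str) -> None:
        self._instances.setdefault(service, []).append(address)

    def resolve(self, service: str) -> str:
        addresses = self._instances.get(service)
        if not addresses:
            raise LookupError(f"no healthy instance of {service!r}")
        return random.choice(addresses)  # naive client-side load balancing

registry = ServiceRegistry()
registry.register("orders", "10.0.0.5:8080")
registry.register("orders", "10.0.0.6:8080")
print(registry.resolve("orders"))  # one of the two registered instances
```

The service and address names are invented for illustration; the point is that the caller asks for "orders" and the infrastructure, not the developer, picks the instance.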



How autonomous software testing could change QA


With conventional automation technology, testers have to invest considerable time into learning how to script each test scenario. On the other hand, with autonomous software testing, testers can spend more time training tools and contributing to QA management initiatives, said Theresa Lanowitz, co-founder and analyst at Voke. Autonomous testing frees testers to spend more time, for instance, helping the CIO or CEO tackle critical objectives around bringing AI into the organization to benefit the customer. And when autonomous tools mature, their capabilities will enable testers to spend more time exploring nonfunctional requirements of a project, such as performance and security. Once these tools fulfill their promise and have a proven track record, many software quality engineers will ditch test tools with scripted interfaces. "Capabilities [of traditional test tools] are going to be so far eclipsed by what these autonomous testing tools can do that you will leave that tool behind," she said.


How China’s first autonomous driving unicorn Momenta hunts for data

Momenta won’t make cars or hardware, Cao assured. Rather, it gives cars autonomous features by making their brains, or deep-learning capacities. It’s in effect a so-called Tier 2 supplier, akin to Intel’s Mobileye, that sells to Tier 1 suppliers who actually produce the automotive parts. It also sells directly to original equipment manufacturers that design cars, order parts from suppliers and assemble the final product. Under both circumstances, Momenta works with clients to specify the final piece of software. Momenta believes this asset-light approach will allow it to develop state-of-the-art driving tech. By selling software to car and parts makers, it not only brings in income but also sources mountains of data, including how and when humans intervene, to train its algorithms at relatively low cost. The company declined to share who its clients are but said they include top carmakers and Tier 1 suppliers in China and overseas. There won’t be many of them because a “partnership” in the auto sector demands deep, resource-intensive collaboration, so less is believed to be more. What we do know is Momenta counts Daimler AG as a backer. It’s also the first Chinese startup that the Mercedes-Benz parent had ever invested in, though Cao would not disclose whether Daimler is a client.



The Global Push to Advance AI

While different nations often see matters of national policy in very different terms, there are times of nearly universal agreement. That’s the case today when it comes to commitments to fuel the advancement of artificial intelligence. Governments around the world agree on the importance of investing in AI initiatives. This point is underscored in a recent report by McKinsey Global Institute. The briefing notes that China and the United States are leaders in AI-related research activities and investments, followed by a second group of countries that includes Germany, Japan, Canada and the United Kingdom. Other countries that are on a path to AI readiness include Belgium, Singapore, South Korea, Sweden, Brazil, India, Italy and Malaysia. There are lots of reasons for the focus on AI. One of them is economic growth. McKinsey says that its survey data suggests AI adoption could raise global GDP (gross domestic product) by as much as $13 trillion by 2030. This equates to about 1.2 percent additional GDP growth per year. Numbers like these suggest that nations have a lot to gain from AI investments.



5 Greentech Companies Pursuing Environmental Solutions

The Scottish company does this by using non-recyclable plastic waste to extend the bitumen used in road production. Not only does this give a new lease of life to plastic that would otherwise have been incinerated or ended up in landfill, it also reduces the amount of fossil fuels needed for road production, and results in a higher quality finished product. MacRebur’s roads can be found all over the UK, where the Department for Transport recently assigned £1.6m to extend the use of plastic roads in Cumbria. They have also begun operations in various countries around the world. ... North American based Recleim styles itself as a next generation recycling company. In partnership with German recycling technology company Adelmann Umwelt GmbH, it offers closed-loop recycling to businesses and organisations. This involves collecting materials to be recycled, processing them, and repurposing them to be used again. Recleim’s proprietary system includes a logistics operation to recover items from businesses and take them to their de-manufacturing plant, where they are cleanly and safely taken apart. This process recovers 95 per cent of components by weight from items such as refrigerators, other large appliances, and electronics.


When to use 5G, when to use Wi-Fi 6

Wi-Fi 6 and 5G are competitive with each other for specific situations in the enterprise environment that depend on location, application and device type. IT managers should carefully evaluate their current and emerging connectivity requirements. Wi-Fi will continue to dominate indoor environments and cellular wins for broad outdoor coverage. Some of the overlap cases occur in stadiums, hospitality and other large event spaces with many users competing for bandwidth. Government applications, including aspects of smart cities, can be applicable to both Wi-Fi and cellular. Health care facilities have many distributed medical devices and users that need connectivity. Large distributed manufacturing environments share similar characteristics. The emerging IoT deployments are perhaps the most interesting “competitive” environment with many overlapping use cases. While the wireless technologies enabling them are converging, Wi-Fi 6 and 5G are fundamentally distinct networks – both of which have their role in enterprise connectivity. Enterprise IT leaders should focus on how Wi-Fi and cellular can complement each other, with Wi-Fi continuing as the in-building technology to connect PCs and laptops, offload phone and tablet data, and for some IoT connectivity.


The 3 critical AI research questions


The most critical piece, he says, is that today, most AI systems are built and require pretty substantial investment in data science, requiring some heavy data scientists and engineering types to build the systems and deploy them for enterprise use. “If you want to extend AI to a wide swath of users, what we need to get to over time — and it’s not going to happen overnight — is some semi-autonomous tools,” Gold explains. “The equivalent of a word processor or PowerPoint that brings it down to the user level instead of having to go out and buy 5,000 data scientists that you can’t get anyway.” In other words, a tool in which you can define a problem you want to solve, or want to get information on, which then goes out and builds the AI system, the learning system, the inference system that will allow you to do that. ... All the major chip players are adding an NNP (neural network processor) to their chips, Gold says, and the next question becomes how best to do that. There are a number of arguments about that as well. Some companies are focusing on the training side, and others are focusing on the inference side, which are two ways of optimizing the architecture. Ultimately, he says, you’ll need both.


Google Researcher Details Windows Cryptographic Library Bug

The problem, Ormandy writes, starts within SymCrypt, which is the primary library for implementing symmetric cryptographic algorithms in Windows 8 and newer operating systems. These algorithms create a single, secret key that is used for both encryption and decryption. The bug essentially creates a never-ending loop within this cryptographic library, Ormandy says. "There's a bug in the SymCrypt multi-precision arithmetic routines that can cause an infinite loop when calculating the modular inverse on specific bit patterns with bcryptprimitives!SymCryptFdefModInvGeneric," Ormandy writes. As part of his research, Ormandy constructed a special X.509 certificate - a recognized public key infrastructure standard - that would trigger the bug by not allowing the system to complete the verification process. Because the certificate is embedded in a secure message or protocol, it can bypass security measures. If one system triggers the flaw, it can go on to affect an entire fleet of Windows devices, he writes. In addition to a denial-of-service attack, this flaw could also force the Windows devices to reboot, the researcher says.
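To see what a modular-inverse routine does and why a non-terminating one is a denial-of-service risk, here is a generic extended-Euclidean implementation with an explicit iteration bound. This is not Microsoft's SymCrypt code (which is optimized multi-precision C); it only illustrates the algorithm class and how bounding the loop converts a potential hang into a recoverable error.

```python
def mod_inverse(a: int, m: int, max_iters: int = 10_000) -> int:
    """Modular inverse of a mod m via the extended Euclidean algorithm.

    A routine like this that loops forever on crafted inputs is exactly
    the denial-of-service class described above; capping the iteration
    count turns an infinite loop into a clean failure.
    """
    old_r, r = a % m, m   # remainders
    old_s, s = 1, 0       # Bezout coefficients for a
    for _ in range(max_iters):
        if r == 0:
            break
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    else:
        raise RuntimeError("iteration bound exceeded: possibly malicious input")
    if old_r != 1:
        raise ValueError("inverse does not exist (a and m share a factor)")
    return old_s % m

print(mod_inverse(3, 11))  # 4, since 3 * 4 = 12 ≡ 1 (mod 11)
```

Certificate verification performs many such inversions, which is why an input that stalls this computation can wedge the whole verification path.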


AIOps early adopters tackle data quality issues


AIOps can augment enterprise IT ops teams as they cope with ever-larger numbers of increasingly complex IT infrastructure components. But AIOps tools are only as good as the data they're given. The earliest days of AIOps stoked fear that advanced data analytics algorithms attached to automated machines will replace human IT experts, but those fears remain far-fetched at best. Early adopters say AIOps tools are far from a magic bullet, and IT ops jobs are safe, even as organizations use artificial intelligence and machine learning tools to sort through infrastructure monitoring data, reduce alert noise and, in some cases, investigate or resolve the causes of incidents. The effectiveness of AIOps software also remains limited by how solidly human IT pros build the data pipelines that feed it and how well human operators in IT and business interpret its results. "In many situations, we help customers realize they don't actually have the right data in place," said Amer Deeba, COO of Moogsoft, an AIOps software vendor in San Francisco.


Middle East has a big problem: It loves tech but can't stop blocking it

"Through their cybercrime laws, the GCC countries have sought to get a stronger grip on social media and to stymie the potential for spillover via online platforms of political unrest from other Arab countries," Hakmeh notes. Other countries are following suit. The Palestinian Authority blocked several news websites in June 2017, a month before a new cybercrime law was enacted. Meanwhile, in Egypt, a 2018 law classified social-media accounts with more than 5,000 followers as media outlets. "Under the new law, social-media users with a large following can be subject to prosecution for spreading false news or inciting crime," Arab News explained. "The law prohibits the establishment of websites without first obtaining a license from the Supreme Council for the Administration of the Media, a government body with authority to legally suspend or block websites in violation of the country's strict laws, and penalize editors with hefty fines."




Quote for the day:

"Always and never are two words you should always remember never to use." -- Wendell Johnson


Daily Tech Digest - June 13, 2019

The fight to keep open source truly “open” — open source providers need to stand up
The benefits of keeping open source open far outweigh its cons. Allowing developers from all backgrounds and training practices to review and modify code means that it is constantly being improved, in turn allowing the entirety of the industry to benefit from innovation, better security and healthy competition. It also gives developers greater mobility. Not only does open source software mean developers are free to train and practice in any type of coding language they please, but also that these non-proprietary coding languages become increasingly popular and in-demand; granting developers flexibility in their work and career. Open source platforms — specifically those that remain true to their roots — are so valuable that they can attract hordes of venture capital funding, even if there are no immediate prospects for monetary returns. Jocelyn Goldfein, a partner at venture capital fund Zetta, pointed out: “There’s probably at least two dozen venture firms that invest a lot in open source now.” Nowadays, the default question isn’t why a platform would be open source — rather, why wouldn’t it?


How edge computing makes 5G essential

With 5G, it is really about decreasing latency and increasing bandwidth, and it's being driven more by enterprise applications than by consumer ones, although you kind of see the consumer influences with people sitting on subways watching movies or playing video games or even doing teleconferencing while they're going to or from work. So those edge applications where the data latency matters, you can think about eMedicine or any of the mission-critical type things that are important with smart cities, you certainly don't want the power to go out while somebody is crossing the street, and your smart sensors go down. That need for the infrastructure out at the edge to be resilient and robust is a critical aspect of what's going to happen with 5G. As we start into that transformation, Vertiv, as a company, is really focused in on how we can support the critical infrastructure at the edge to ensure that capabilities are always on through battery backup, that you don't have thermal runaway in hot locations like Phoenix, or very cold locations, if it's in Alaska or wherever that might happen to be.


How Far Are We From Achieving Artificial General Intelligence?

Artificial General Intelligence (AGI) can be defined as the ability of a machine to perform any task that a human can. Although the aforementioned applications highlight the ability of AI to perform tasks with greater efficacy than humans, they are not generally intelligent, i.e., they are exceedingly good at only a single function while having zero capability to do anything else. Thus, while an AI application may be as effective as a hundred trained humans in performing one task, it can lose to a five-year-old kid in competing over any other task. For instance, computer vision systems, although adept at making sense of visual information, cannot translate and apply that ability to other tasks. On the contrary, a human, although sometimes less proficient at performing these functions, can perform a broader range of functions than any of the existing AI applications of today. While an AI has to be trained in any function it needs to perform with massive volumes of training data, humans can learn with significantly fewer learning experiences. Additionally, humans -- and agents with artificial general intelligence -- can generalize better to apply the learnings from one experience to other similar experiences.



Tomorrow's Cybersecurity Analyst Is Not Who You Think

First, cybercriminals are becoming much better at penetrating organizations using nontechnical means. With social engineering and phishing techniques, they can bypass organizations' increasingly advanced defenses by manipulating insiders to gain access. Research shows that phishing and social engineering were the most common methods of compromise in 2018, serving as the conduit to the initial point of entry in more than 60% of security breaches in both cloud and point-of-sale environments, as well as in 46% of corporate and internal network breaches. Second, the volume of data in organizations is growing exponentially and is increasingly stored in a more decentralized manner, making it difficult to ensure it's being optimally protected. Research firm IDC predicts the volume of data worldwide will grow tenfold by 2025 to 163 zettabytes, with the majority being created and managed by enterprises. This growth is being driven by the proliferation of artificial intelligence, the Internet of Things, and other machine-to-machine technologies in enterprises across all industries.


Trainline On Track: Innovating And Navigating Change

Talent remains a key challenge for businesses – particularly when the knowledge and expertise required for technical roles is so vast. As Director of Engineering, Midgley is tasked with building and leading a team of 300 tech travel specialists. “The thing that I’m most passionate about is engineering culture. We really seek out the very best talent in each of our locations, and it’s always hard. The flip side is that, once these amazing people are through the door, you have to retain them and make them feel rewarded, satisfied, and highly motivated,” he says. To some extent, retaining talent is about giving people room to grow. This might include sending employees on study days and courses, or encouraging them to take part in enriching or educational activities. At the same time, this brings new learnings, approaches, and technologies into the organisation. In terms of working structures, Trainline has adopted the ‘two-pizza rule’ favoured by Amazon, in which employees form small working groups that could, as the name suggests, be fed by two pizzas. As well as leveraging internal employees as a source of innovation and ideas, the company collaborates with hundreds of international partners.


CAD and PLM: transforming the industrial landscape and shaping how humans work

“When we marry digital with human — which is what AR is all about — we get vastly more productive workers, thanks to the ability of the digital world to monitor, control, and optimise the world of humans,” continued Heppelmann. AR is about virtualising physical infrastructure in the industrial world, with real-time information for safety and productivity. “It should act as a digital mentor.” ... Combining technological innovation with the skills of workers expands human possibilities, and it must be part of business strategy. Humans do not have the capability to shift through the vast amounts of data that factories or indeed, any businesses create. They need a way to look at the whole environment and know where to run analytics to optimise productivity. ... IoT can gather and analyse data of physical machines, which will mitigate the problem of unplanned downtime, which drives down equipment availability — in a factory setting, for example. Machine learning can then be used to predict when this problem will next occur, while AR and VR can be used by the human to view the problem up close in a digital environment; on top of the obvious safety and training benefits.


Cisco launches a developer-community cert program

Perhaps one of the biggest additions – rolled out here at the company’s Cisco Live customer event – is the new set of professional certifications for developers utilizing Cisco’s growing DevNet developer community. The Cisco Certified DevNet Associate, Specialist and Professional certifications will cover software development for applications, automation, DevOps, cloud and IoT. They will also target software developers and network engineers who are building the software proficiency to create applications and automated workflows for operational networks and infrastructure. “This certification evolution is the next step to reflect the critical skills network engineers must have to be at the leading edge of networked-enabled business disruption and delivering customer excellence,” said Mike Adams, vice president and general manager of Learning@Cisco. “To perform effectively in this new world, every IT professional needs skills that are broader, deeper and more agile than ever before. And they have to be comfortable working as a multidisciplinary team including infrastructure network engineers, DevOps and automation specialists, and software professionals.”


The Rise of 'Purple Teaming'

Part of what makes Red Teaming and Purple Teaming so valuable is that they provide insight into the specific tactics and approaches that attackers might use. Deception technology can enhance this visibility when incorporated into the testing program. The first benefit comes from detecting attackers early by enticing them to engage with decoys or deception lures. The second comes from gathering full indicators of compromise (IOCs) and tactics, techniques, and procedures (TTPs) on lateral movement activity. This significantly enhances visibility into how and when attackers circumvent security controls, enriching the information that typically results from these exercises. Cyber deceptions deploy traps and lures on the network without interfering with daily operations. A basic deployment can easily be completed in under a day, providing the Blue Team an additional detection mechanism that blends in with the operational environment. This creates more opportunities to detect when the Red Team bypasses a defensive control, forcing team members to be more deliberate with their actions and making simulated attack scenarios more realistic.


Agile vs. Top-Down Management: Leadership Must Evolve as an Organization Matures

Agile was first used in software development. Its purpose then was to deliver a more relevant product to customers through smaller iterations on a shorter cycle, which gave developers the opportunity to incorporate user feedback into future releases. As other teams and departments appropriated agile, its purpose evolved into accelerating growth by reducing the time-to-value of growth initiatives like marketing while also ensuring a project is effective before ramping it up to scale. The problem with Agile at scale is that time-to-value usually isn’t dependent upon a single team’s ability to execute. Even projects that appear small involve multiple teams, and stakeholders within them are juggling competing priorities and relying upon different feedback sources to inform the direction of projects. When an Agile team operates without awareness of the work in the rest of the organization, it can become misaligned from other teams. Real-time communication is necessary between teams to help with these handoff points; otherwise, it’s like teams are playing hot potato with projects, tossing the ball to someone who may not be ready to catch it.


What is data protection by design and default

To explain how the approach works, we must first break it into its two component parts. The first is data protection by design, which ensures that organisations address information security and privacy in the planning stage of any system, service, product or process that uses personal data. With cyber attacks on the rise, a growing public interest in data privacy and the strengthened penalties introduced by the GDPR, it makes sense to prioritise information security. If you don’t, you’ll be left trying to tack security controls onto existing set-ups. This could lead to improperly implemented controls that expose vulnerabilities, and expensive restructuring projects. ... Data protection by default ensures that organisations conduct data processing activities only if they are necessary to achieve a specific goal. As such, it links to the GDPR’s principles of data minimisation and purpose limitation. One way to achieve this is to give data subjects the strongest possible privacy settings by default – hence the name. This helps prevent data being collected excessively, and gives the data subject the option to consent to more extensive data practices if they want to use other services.
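The "strongest possible privacy settings by default" idea can be shown in a few lines of code. This is a hypothetical sketch, not a GDPR-mandated configuration: the field names and values are invented to illustrate that a new data subject starts with the most protective options and must opt in to anything broader.

```python
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    """Data protection by default: the most protective options apply
    unless the data subject explicitly opts in to broader processing.
    All defaults below are illustrative, not regulatory requirements."""
    marketing_emails: bool = False       # no processing beyond the core service
    analytics_tracking: bool = False
    profile_visibility: str = "private"
    retention_days: int = 30             # data minimisation: short retention

# A new user automatically receives the most protective configuration...
new_user = PrivacySettings()
print(new_user)

# ...and may later consent to more extensive data practices explicitly.
opted_in = PrivacySettings(marketing_emails=True)
```

Encoding the defaults in the type itself means no code path can create a user record with looser settings by accident, which is the "by design" half of the principle.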



Quote for the day:


"Wisdom comes from experience. Experience is often a result of lack of wisdom." -- Terry Pratchett


Daily Tech Digest - June 12, 2019

IoT security vs. privacy: Which is a bigger issue?

Predictably, most of the teeth-gnashing has come on the consumer side, but that doesn’t mean enterprise users are immune to the issue. On the one hand, just like consumers, companies are vulnerable to their proprietary information being improperly shared and misused. More immediately, companies may face backlash from their own customers if they are seen as not properly guarding the data they collect via the IoT. Too often, in fact, enterprises shoot themselves in the foot on privacy issues, with practices that range from tone-deaf to exploitative to downright illegal—leading almost two-thirds (63%) of consumers to describe IoT data collection as “creepy,” while more than half (53%) “distrust connected devices to protect their privacy and handle information in a responsible manner.” ... Police in more than 50 cities and towns across the country are apparently offering free or discounted Ring doorbells, and sometimes requiring the recipients to share footage for use in investigations. Many privacy advocates are troubled by this degree of cooperation between police and Ring, but that’s only part of the problem. Last year, for example, Ring workers in Ukraine reportedly watched customer feeds. Amazingly, though, even that only scratches the surface of the privacy flaps surrounding Ring.



Researchers crack digital safe using HSM flaw


The researchers found that the firmware built into the module was signed, but not encrypted. This meant that they could analyze how it worked, and they found that it allowed them to upload and run additional custom code. They used the software development kit (SDK) provided with the HSM to upload a custom firmware module to the unit. This gave them access to a shell inside the HSM that they could use to run a debugger and analyze the inner workings of the unit. From there, they ran a fuzzer, which sends a lot of queries to the HSM’s PKCS #11 API. PKCS #11 is a cryptographic API created by RSA. They hit the API with a large number of parameters looking for data that might throw the HSM into an unstable state. These tests uncovered several buffer overflow bugs that they could trigger by sending the HSM certain commands. The researchers were able to write a module that they could run as unsigned custom firmware on the HSM that enabled them to dump all its secrets. They could recover keys, read secrets directly from the HSM’s memory, and dump the contents of the module’s flash storage, including its decryption key.
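The fuzzing step can be illustrated with a minimal harness. This is not the researchers' tooling and does not speak PKCS #11; it only shows the generic pattern they applied: generate randomized inputs, feed them to a target interface, and record which inputs push it into a failure state. `toy_parser` is an invented stand-in for the HSM's API surface.

```python
import random

def fuzz(target, rounds: int = 1000, seed: int = 0):
    """Minimal fuzz harness in the spirit of the testing described above:
    call the target with randomized byte strings and record which inputs
    raise. A real harness would also watch for hangs and crashes."""
    rng = random.Random(seed)           # fixed seed keeps runs reproducible
    findings = []
    for _ in range(rounds):
        payload = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 64)))
        try:
            target(payload)
        except Exception as exc:
            findings.append((payload, repr(exc)))
    return findings

def toy_parser(data: bytes) -> int:
    """Hypothetical stand-in for an API with a flawed input-handling path."""
    if len(data) > 2 and data[0] == 0xFF:
        raise ValueError("unexpected tag")   # the failure state a fuzzer hunts for
    return len(data)

findings = fuzz(toy_parser)
print(f"{len(findings)} failing inputs found")
```

Against a real HSM, each "finding" would be a command that crashed or corrupted the device state, which the researchers then triaged into the exploitable buffer overflows.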


Combine containers and serverless to optimize app environments


Serverless is a new and misleading label for an old concept: run applications or scripts on demand without provisioning the runtime infrastructure beforehand. SaaS apps, such as Google Docs, might be considered serverless; when users create a document, they don't have to provision the back-end system that runs the application. Serverless takes this concept to application code, which is abstracted from its various infrastructure services, such as storage, databases, machine learning systems and streaming data processing. Google Cloud emphasizes that serverless functions aren't limited to event-driven code execution, but rather include many of its IaaS and PaaS products that instantiate and terminate on demand and don't require prior setup. On cloud serverless platforms, like AWS Lambda and Azure Functions, functions run code in response to an event trigger, such as an event on a message queue or notification service, and are typically used for short-duration jobs that handle tasks such as data acquisition, filtering and transformation, application integration and user input.
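A short-duration, event-triggered function of the kind described above looks like this. The shape mirrors the common Lambda-style `handler(event, context)` signature, but the event structure here is invented for illustration (it is not the real AWS or Azure event schema), and the trigger is simulated locally rather than fired by a queue.

```python
import json

def handler(event: dict, context=None) -> dict:
    """Serverless-style function: short-lived and event-driven, with no
    pre-provisioned runtime. This one filters and transforms records from
    a (simulated) message-queue trigger -- a typical short data job."""
    records = event.get("records", [])
    cleaned = [
        {"id": r["id"], "value": round(r["value"], 2)}
        for r in records
        if r.get("value") is not None        # filtering step
    ]                                        # transformation step above
    return {"statusCode": 200, "body": json.dumps(cleaned)}

# Locally simulated trigger; on a cloud platform the provider invokes
# the handler in response to the queue event and tears the runtime down after.
event = {"records": [{"id": 1, "value": 3.14159}, {"id": 2, "value": None}]}
print(handler(event))
```

The function holds no state between invocations, which is what lets the platform instantiate and terminate it on demand without prior setup.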


Ensuring trust in an age of digital banking

First, the bank needs to be sustainable. That includes following a code of conduct: integrating sustainability risk in processes and strengthening policies and enabling transparent reporting, as well as conducting the work that prevents the bank from being used for different types of financial crime. This is our license to operate. Second, we develop financial services with positive climate impact as a response to our customers’ needs. We have a very proud 10-year history of offering green bonds. Last year we launched green mortgages. In January, we launched our first blue bond [for investing in marine conservation projects], and we also offer green car leasing. We are trying to cater to customer demand. We understand that people care about what they do with their money. We have a very ambitious plan to introduce more financial solutions that capture what every single individual cares about. Today there is a good array of different products and services with positive climate impact, but it is still too little to meet the growing demand.


Hybrid Development: The Value at the Intersection of TDD, DDD, and BDD

Domain Driven Design
What is the best way to tackle a large development project? You break it down into smaller, more manageable segments, or, in the case of DDD, domains. When you split the project into smaller domains, you can have segregated teams handle the functionality of each domain end-to-end. And to best understand those domains, you enlist the help of domain experts: people who understand the problem and its realm of knowledge better than anyone else. Typically, the domain expert is not the one responsible for developing the solution; rather, DDD collectively is used to help bridge the knowledge gap that usually exists between these experts and the solution being built. Through models, context, and ubiquitous language, all parties involved should have a clear understanding of what the particular problems are and how the ensuing build will be structured. ... As the complexity of your projects grows, the only way to maintain the viability of your build and ensure success is to have your development practices grow with it.
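To make the ubiquitous-language idea concrete, here is a small TDD-style sketch: a domain model and a test expressed in the same domain vocabulary, so a domain expert can read what the behaviour should be. The `Order` entity, its methods, and the ordering domain itself are invented for illustration.

```python
import unittest

# Hypothetical domain model using the ubiquitous language of an
# ordering domain; names like Order, add_line and total_cents are
# chosen to match how domain experts talk about the problem.
class Order:
    def __init__(self):
        self._lines = []

    def add_line(self, description, price_cents, quantity=1):
        self._lines.append((description, price_cents, quantity))

    def total_cents(self):
        return sum(price * qty for _, price, qty in self._lines)

# In TDD this test would be written first, then the model above
# would be implemented to make it pass.
class OrderTotalTest(unittest.TestCase):
    def test_total_sums_line_prices_times_quantities(self):
        order = Order()
        order.add_line("widget", 250, quantity=2)
        order.add_line("gadget", 100)
        self.assertEqual(order.total_cents(), 600)

if __name__ == "__main__":
    unittest.main()
```

Because the test reads in domain terms rather than implementation terms, it doubles as executable documentation that both developers and domain experts can review.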


Reaping the benefits of a strong strategy-driven business analytics IQ

Analytics IQ is a measure of an organization’s ability to leverage analytics to support business and IT objectives. Many organizations start their analytics journey eagerly, but without a clear strategy. This approach often leads to failed pilot projects, which have not provided the needed insights to answer business questions. Let us take a step back and first focus on analytics. It is easier to understand analytics when you understand the process that data goes through to become actual, actionable intelligence, rather than unusable numbers and words. I like to think about it in terms of retail. The price of an item is just plain data. However, when we add context, for example that the price is attached to a celebrity’s merchandise and that the celebrity was recently involved in a controversy, the data becomes information: something of interest to us. The information can then be used to try and predict what will happen to the price of this merchandise in the following days. That is intelligence: When we add context to information, it becomes intelligence.
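The data-to-information-to-intelligence progression from the retail example can be sketched in a few lines. The function names, context flags, and prediction rule are all invented for illustration; real analytics pipelines would of course be far richer.

```python
# Hypothetical sketch of the data -> information -> intelligence steps.
def to_information(price, context):
    """Attach context to raw data, turning it into information."""
    return {"price": price, **context}

def to_intelligence(info):
    """Interpret the information to anticipate what may happen next."""
    if info.get("celebrity_item") and info.get("recent_controversy"):
        return "expect price volatility in the coming days"
    return "no signal"

data = 49.99  # plain data: just a price
info = to_information(
    data, {"celebrity_item": True, "recent_controversy": True}
)
print(to_intelligence(info))
```

The same raw price yields a different conclusion depending on the context attached to it, which is exactly the distinction the passage draws between data, information, and intelligence.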


Triada backdoors were pre-installed on Android devices


The story of Triada began when Kaspersky Lab researchers discovered it in early 2016, and at that time the main purpose of the Android malware was "to install spam apps on a device that displays ads," according to Google. Last week, Lukasz Siewierski, a reverse engineer on the Android security and privacy team at Google, explained that Triada was much more advanced than previously thought. "The methods Triada used were complex and unusual for these types of apps," Siewierski wrote in a blog post. "Triada apps started as rooting Trojans, but as Google Play Protect strengthened defenses against rooting exploits, Triada apps were forced to adapt, progressing to a system image backdoor." While Google added features to Android to protect against threats like Triada, the threat actors behind the malware took another unusual approach in the summer of 2017 and performed a supply chain attack to get the backdoor malware preinstalled on budget phones.


What Stands Out in Proposed Premera Lawsuit Settlement?

Technology attorney Steven Teppler points to the attention given to "fixing" the health insurer's security problems. The proposed agreement, which was filed on May 31 in a federal court in Oregon, would settle a class action lawsuit that consolidated more than 40 lawsuits filed after the data breach was revealed in March 2015 by the Seattle-based insurer. It awaits court approval. The settlement proposes $32 million for breach victims and related legal costs and would require the health insurer to invest $42 million in bolstering data security. The settlement "not only takes care of victims, but takes care of business internally at the organization to make sure there are resources devoted to fixing or mitigating the security problem, but also that there are ways to establish milestones to make sure what is promised is actually done," Teppler says in an interview with Information Security Media Group. Under the settlement, Premera would spend at least $14 million annually over the next three years on enhanced data security measures.


5 ways to achieve a risk-based security strategy


A risk-based security approach, on the other hand, identifies the true risks to an organization's most valuable assets and prioritizes spending to mitigate those risks to an acceptable level. A security strategy shaped by risk-based decisions enables an organization to develop more practical and realistic security goals and spend its resources in a more effective way. It also delivers compliance, not as an end in itself, but as a natural consequence of a robust and optimized security posture. Although a risk-based security strategy requires careful planning and ongoing monitoring and assessment, it doesn't have to be an overly complex process. There are five key steps to implementing risk-based security, and though time-consuming, they will align security with the goals of the organization. Board-level support is paramount. Input from numerous stakeholders throughout the organization is essential, as risk mitigation decisions can have a serious effect on operations, which security teams may not fully appreciate if they make these decisions in isolation.
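One common way to operationalize "prioritize spending to mitigate those risks" is a simple likelihood-times-impact score per asset. The assets and numbers below are invented for illustration, and the scoring model is a deliberately minimal sketch, not a prescribed methodology.

```python
# Illustrative risk-scoring sketch: rank assets by likelihood x impact
# so mitigation spend goes to the highest risks first.
assets = [
    {"name": "customer database", "likelihood": 0.6, "impact": 9},
    {"name": "public website",    "likelihood": 0.8, "impact": 4},
    {"name": "build server",      "likelihood": 0.3, "impact": 6},
]

for a in assets:
    a["risk"] = a["likelihood"] * a["impact"]

prioritized = sorted(assets, key=lambda a: a["risk"], reverse=True)
for a in prioritized:
    print(f'{a["name"]}: risk score {a["risk"]:.1f}')
```

Even this crude ranking illustrates the core idea: the asset with the highest combined likelihood and impact, not the most visible one, gets mitigation resources first.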


Large firms look to zero-trust security to reduce cyber risk


Essentially, a zero-trust approach is about applying authentication and authorisation to ensure that all traffic within an enterprise is properly authenticated and authorised, whether it is someone coming in from the outside on a VPN connection, an application talking to another application on the network, or a user trying to use an application on the network. “The data from the survey shows many similarities between the various countries in terms of the gaps and threats that large enterprises need to deal with, with respect to secure access,” said Scott Gordon, chief marketing officer at Pulse Secure. “Perhaps the most significant difference in secure access priorities was more focus on improving endpoint security and remediation prior to access in the US (57%) compared with 43% in the UK and just 31% in Germany, Austria and Switzerland. This trend also matches higher IoT adoption in the US, although Europe is catching up fast.” A key takeaway from this report, said Gordon, is that large organisations across Europe are dealing with an increasingly hybrid IT environment.
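The per-request check at the heart of zero trust can be sketched very simply: every call is authenticated and authorised on its own merits, regardless of where on the network it originates. The token store, permission table, and resource names below are all hypothetical.

```python
# Minimal zero-trust flavoured sketch: no request is trusted by default.
TOKENS = {"abc123": "alice"}                     # token -> user
PERMISSIONS = {("alice", "reports"): {"read"}}   # (user, resource) -> actions

def authorize(token, resource, action):
    user = TOKENS.get(token)          # authenticate: who is calling?
    if user is None:
        return False
    allowed = PERMISSIONS.get((user, resource), set())
    return action in allowed          # authorise: may they do this?

print(authorize("abc123", "reports", "read"))   # valid token, permitted action
print(authorize("abc123", "reports", "write"))  # valid token, not permitted
print(authorize("bogus",  "reports", "read"))   # unauthenticated caller
```

In a real deployment the token check would be cryptographic (for example, signed JWTs or mutual TLS) and the policy store centralised, but the shape is the same: identity and permission are verified on every access, never inferred from network location.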



Quote for the day:


"Though nobody can go back and make a new beginning... Anyone can start over and make a new ending." -- Chico Xavier