Daily Tech Digest - December 04, 2020

The evolving role of operations in DevOps

To better understand how DevOps changes the responsibilities of operations teams, it will help to recap the traditional, pre-DevOps role of operations. Let’s take a look at a typical organization’s software lifecycle: before DevOps, developers package an application with documentation, and then ship it to a QA team. The QA team installs and tests the application, and then hands off to production operations teams. The operations teams are then responsible for deploying and managing the software with little-to-no direct interaction with the development teams. These dev-to-ops handoffs are typically one-way, often limited to a few scheduled times in an application’s release cycle. Once in production, the operations team is then responsible for managing the service’s stability and uptime, as well as the infrastructure that hosts the code. If there are bugs in the code, the virtual assembly line of dev-to-qa-to-prod is revisited with a patch, with each team waiting on the other for next steps. This model typically requires pre-existing infrastructure that needs to be maintained, and comes with significant overhead. While many businesses remain competitive with this model, the faster, more collaborative way of bridging the gap between development and operations is finding wide adoption in the form of DevOps.


Monitoring Microservices the Right Way

The common practice with StatsD and other traditional solutions was to collect metrics in push mode, which required explicitly configuring each component and third-party tool with the metrics collector destination. With the many frameworks and languages involved in modern systems, it has become challenging to maintain this explicit push-mode sending of metrics. Adding Kubernetes to the mix increased the complexity even further. Teams were looking to offload the work of collecting metrics. This was a distinct strong point of Prometheus, which offered pull-mode scraping, together with service discovery of the components ("targets" in Prometheus terms). In particular, Prometheus shined with its native scraping from Kubernetes, and as demand for Kubernetes skyrocketed, so did demand for Prometheus. As the popularity of Prometheus grew, many open source projects added support for the Prometheus Metrics Exporter format, which has made metrics scraping with Prometheus even more seamless. Today you can find Prometheus exporters for many common systems including popular databases, messaging systems, web servers, and hardware components.
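
To make the push-versus-pull distinction concrete, here is a minimal sketch of how a service exposes metrics for Prometheus to scrape in pull mode, using the official prometheus_client Python library. The metric names, port, and simulated workload are illustrative, not taken from the article.

```python
# Minimal sketch: exposing metrics for Prometheus to scrape (pull mode),
# using the official prometheus_client library. Metric names and the port
# are illustrative; a real service would instrument its own code paths.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["endpoint"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

def handle_request(endpoint: str) -> None:
    """Pretend to serve a request and record metrics about it."""
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.1))  # simulated work
    REQUESTS.labels(endpoint=endpoint).inc()

if __name__ == "__main__":
    # Prometheus discovers this target (e.g. via Kubernetes service discovery)
    # and scrapes http://<host>:8000/metrics on its own schedule; the service
    # never needs to know where the collector lives.
    start_http_server(8000)
    while True:
        handle_request("/checkout")
```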


Will Blockchain Replace Clearinghouses? A Case Of DVP Post-Trade Settlement

Blockchain technology can improve settlement processes substantially. First, using a blockchain makes it possible to decrease counterparty risk as it enables a trustless settlement process that is similar to DVP settlement in that the delivery of an asset is directly linked to the instantaneous payment for the asset. Similarly, atomic swaps enable direct “barter” operations in which one tokenized asset is directly exchanged for another tokenized asset (delivery versus delivery). Here, “directly exchanged” means that the technology guarantees that both transfers have to happen. It is technologically impossible for only one transfer to be executed if the other is interrupted for whatever reason. Moreover, if a blockchain is used for settlement, the third-party intermediary that helps to facilitate settlement in a conventional, non-DLT-based DVP is no longer necessary. This enables peer-to-peer settlement, which leads to substantial cost savings. In addition, cross-chain atomic swaps cover more complex cases such as trustless settlement among more than two parties.
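
The "both transfers or neither" property described above is commonly built on hashed timelocks. The following plain-Python sketch only illustrates that idea; it is not a real blockchain contract, and the asset names, timeouts, and class design are invented for the example.

```python
# Illustrative sketch (not a real blockchain contract): the hashed timelock
# pattern behind atomic swaps. Each leg can only be claimed with the secret
# preimage; revealing the secret to claim one leg makes the other leg
# claimable too, so either both transfers settle or neither does.
import hashlib
import time

class HashedTimelock:
    def __init__(self, asset: str, hashlock: bytes, expires_at: float):
        self.asset = asset
        self.hashlock = hashlock          # sha256(secret)
        self.expires_at = expires_at      # refund allowed after this time
        self.claimed = False

    def claim(self, preimage: bytes) -> bool:
        """Release the asset only if the correct secret is presented in time."""
        if time.time() < self.expires_at and hashlib.sha256(preimage).digest() == self.hashlock:
            self.claimed = True
        return self.claimed

    def refund(self) -> bool:
        """After the timeout, an unclaimed asset returns to its original owner."""
        return not self.claimed and time.time() >= self.expires_at

# Usage: both legs of the swap are locked to the hash of one secret.
secret = b"only the initiator knows this"
lock = hashlib.sha256(secret).digest()
leg_a = HashedTimelock("tokenized bond", lock, time.time() + 3600)
leg_b = HashedTimelock("tokenized cash", lock, time.time() + 1800)
assert leg_b.claim(secret) and leg_a.claim(secret)  # revealing the secret settles both
```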


Cloud native security: A maturing and expanding arena

Along with the usual array of preventative controls that are deployed as part of a cloud native platform, companies need to focus on detection and response to breaches. It’s important to note that the usual toolsets that are put in place will need to be supplemented by cloud native tools that can provide targeted visibility into container-based workflows. Projects like Falco, which can integrate with container workloads at a low level, are an important part of this. Additionally, companies should make sure to properly use the facilities that Kubernetes provides. For example, Kubernetes audit logging is rarely enabled by default, but it’s an important control for any production cluster. A key takeaway for container security deployments is the importance of getting security controls in place before workloads are placed into production. Ensuring that developers are making use of Kubernetes features like Security Contexts to harden their deployments will make the deployment of mandatory controls much easier. Also ensuring that a “least privilege” initial approach is taken to network traffic in a cluster can help avoid the “hard shell, soft inside” approach to security that allows attackers to easily expand their access after an initial compromise has occurred.
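
As a small illustration of checking that Security Contexts are actually in use before workloads hit production, here is a rough audit sketch using the official `kubernetes` Python client. It assumes cluster access via a local kubeconfig, and the specific checks are examples rather than a complete hardening policy.

```python
# Rough sketch: flag containers that skip basic Security Context hardening.
# Assumes the official `kubernetes` Python client and a working kubeconfig;
# the checks below are examples, not a complete policy.
from kubernetes import client, config

def audit_security_contexts() -> None:
    config.load_kube_config()
    core = client.CoreV1Api()
    for pod in core.list_pod_for_all_namespaces(watch=False).items:
        for container in pod.spec.containers:
            sc = container.security_context
            problems = []
            if sc is None:
                problems.append("no securityContext set")
            else:
                if sc.run_as_non_root is not True:
                    problems.append("runAsNonRoot not enforced")
                if sc.allow_privilege_escalation is not False:
                    problems.append("privilege escalation not disabled")
                if sc.privileged:
                    problems.append("privileged container")
            if problems:
                print(f"{pod.metadata.namespace}/{pod.metadata.name}/{container.name}: "
                      + ", ".join(problems))

if __name__ == "__main__":
    audit_security_contexts()
```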


Cloud computing in the real world: The challenges and opportunities of multicloud

In an ideal world, application workloads -- whatever their heritage -- should be able to move seamlessly between, or be shared among, cloud service providers (CSPs), alighting wherever the optimal combination of performance, functionality, cost, security, compliance, availability, resilience, and so on, is to be found -- while avoiding the dreaded 'vendor lock-in'. "Businesses taking a multicloud approach can cherry-pick the solutions that best meet their business needs as soon as they become available, rather than having to wait for one vendor to catch up," John Abel, technical director, office of the CTO, Google Cloud, told ZDNet. "Avoiding vendor lock-in, increased agility, more efficient costs and the promise of each provider's best solutions are all too great to ignore." That's certainly the view taken by many respondents to the survey underpinning the 2020 State of Multicloud report from application resource management company Turbonomic. ... "Bottom-line, cultural change is a prerequisite for managing the complexity of today's hybrid and multicloud environments. Teams must operate faster, dynamically adapting to shifting market trends to stay competitive."


An Architect's guide to APIs: REST, GraphQL, and gRPC

The benefit of taking an API-based approach to application architecture design is that it allows a wide variety of physical client devices and application types to interact with the given application. One API can be used not only for PC-based computing but also for cellphones and IoT devices. Communication is not limited to interactions between humans and applications. With the rise of machine learning and artificial intelligence, service-to-service interaction facilitated by APIs will emerge as the Internet's principal activity. APIs bring a new dimension to architectural design. However, while network communication and data structures have become more conventional over time, there is still variety among API formats. There is no "one ring to rule them all." Instead, there are many API formats, with the most popular being REST, GraphQL, and gRPC. Thus a reasonable question to ask is, as an Enterprise Architect, how do I pick the best API format to meet the need at hand? The answer is that it's a matter of understanding the benefits and limitations of the given format.
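
To show the practical difference an architect is weighing, here is a small sketch of the same "customer" data fetched once through a REST endpoint and once through a GraphQL query. The URLs, field names, and schema are hypothetical, used only to contrast the two call shapes.

```python
# Illustration only: the same "customer" data fetched through a REST endpoint
# versus a GraphQL query. The URLs and field names are hypothetical.
import requests

BASE = "https://api.example.com"

def get_customer_rest(customer_id: str) -> dict:
    # REST: the resource shape is fixed by the server; one URL per resource.
    resp = requests.get(f"{BASE}/customers/{customer_id}", timeout=10)
    resp.raise_for_status()
    return resp.json()

def get_customer_graphql(customer_id: str) -> dict:
    # GraphQL: a single endpoint; the client states exactly which fields it wants.
    query = """
    query($id: ID!) {
      customer(id: $id) { name email orders { total } }
    }"""
    resp = requests.post(
        f"{BASE}/graphql",
        json={"query": query, "variables": {"id": customer_id}},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["data"]["customer"]
```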


Unlock the Power of Omnichannel Retail at the Edge

The Edge exists wherever the digital world and physical world intersect, and data is securely collected, generated, and processed to create new value. According to Gartner, by 2025, 75 percent of data will be processed at the Edge. For retailers, Edge technology means real-time data collection, analytics and automated responses where they matter most — on the shop floor, be that physical or virtual. And for today’s retailers, it’s what happens when Edge computing is combined with Computer Vision and AI that is most powerful and exciting, as it creates the many opportunities of omnichannel shopping. With Computer Vision, retailers enter a world of powerful sensor-enabled cameras that can see much more than the human eye. Combined with Edge analytics and AI, Computer Vision can enable retailers to monitor, interpret, and act in real-time across all areas of the retail environment. This type of vision has obvious implications for security, but for retailers it also opens up huge possibilities in understanding shopping behavior and implementing rapid responses. For example, understanding how customers flow through the store, and at what times of the day, can allow the retailer to put more important items directly in their paths to be more visible.


Hacking Group Used Crypto Miners as Distraction Technique

The use of the monero miners helped the hacking group establish persistence within targeted networks and enabled them to deploy other spy tools and malware without raising suspicion. That's because cryptocurrency miners are usually low-level security priorities for most organizations, according to Microsoft. "Cryptocurrency miners are typically associated with cybercriminal operations, not sophisticated nation-state actor activity," the Microsoft report notes. "They are not the most sophisticated type of threats, which also means that they are not among the most critical security issues that defenders address with urgency. Recent campaigns from the nation-state actor Bismuth take advantage of the low-priority alerts coin miners cause to try and fly under the radar and establish persistence." The Microsoft report also notes: "While this actor's operational goals remained the same - establish continuous monitoring and espionage, exfiltrating useful information as it surfaced - their deployment of coin miners in their recent campaigns provided another way for the attackers to monetize compromised networks."


Cypress vs. Selenium: Compare test automation frameworks

Selenium suits applications that don't have many complex front-end components. Selenium's support for multiple languages makes it a good choice as the test automation framework for development projects that aren't in JavaScript. Selenium is open source, has ample documentation and is well supported by many other open source tools. Also, when a project calls for behavior-driven development (BDD), organizations find Selenium fits the approach well, as many libraries, like Cucumber or Capybara, make writing tests within BDD structured and implementable. Cypress is a great tool to automate JavaScript application testing. And that's a large group, as JavaScript is the language of choice for many modern web applications. Cypress integrates well with the client side and asynchronous design of these applications, as it natively ties into the web browser. Thus, test scripts run much quicker and more reliably than they would for the same application tested with Selenium for automation. Cypress might be better suited for a testing team with programming experience, as JavaScript is a complex single-threaded, non-blocking, asynchronous, concurrent language.
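
For contrast with Cypress (whose tests are written in JavaScript and run inside the browser), here is a minimal Selenium test sketch in Python. The URL, element IDs, and credentials are placeholders, and the script assumes a local ChromeDriver is available.

```python
# Minimal Selenium sketch in Python. URL and element IDs are placeholders;
# assumes ChromeDriver is installed and on the PATH.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def test_login_flow() -> None:
    driver = webdriver.Chrome()
    try:
        driver.get("https://app.example.com/login")
        driver.find_element(By.ID, "username").send_keys("qa-user")
        driver.find_element(By.ID, "password").send_keys("not-a-real-password")
        driver.find_element(By.ID, "submit").click()
        # Explicit waits matter with asynchronous front ends, which is where
        # Cypress's native browser integration tends to feel smoother.
        WebDriverWait(driver, 10).until(
            EC.presence_of_element_located((By.ID, "dashboard"))
        )
    finally:
        driver.quit()

if __name__ == "__main__":
    test_login_flow()
```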


The Complexity of Product Management and Product Ownership

An issue for organisational leaders considering how to design for product flow is that when someone says product ownership or product management we are immediately uncertain which of many possible definitions the person is referring to. This level of ambiguity is a constant struggle in the software world. Agile, DevOps, and Digital are all now terms which are the subject of confusion and passionate never-ending debates. Product ownership/management has now joined them. Kent Beck described a similar issue in software teams when everyone has slightly different concepts in their minds when describing system components. He called this the problem of metaphor and prescribed System Metaphor as a key practice in eXtreme Programming. We need to take this practice of System Metaphor to our wider discussions as product delivery groups if we are going to resolve bigger issues. To help consider some of the Metaphor surrounding the product owner function, I highly recommend the blog by Roman Pichler (an author on product management). He does a good job of creating metaphors for the key variations in product management roles.



Quote for the day:

"Real generosity towards the future lies in giving all to the present." -- Camus

Daily Tech Digest - December 03, 2020

The Service Factory of the Future

The service factory of the future will break the compromise between personalization and industrialization by leveraging standard service bits: small elements of service, such as a chatbot or an online shopping cart. Service bits will increasingly consist of “microservices”—digitized service offerings or processes—that are accessed through APIs and either created in-house or procured from ecosystem partners. Bits can also be automated or manual service activities based on legacy IT systems. By flexibly combining service bits, the service factory of the future will be able to create hyperpersonalized offerings and packages tailored to an individual’s needs, preferences, and habits on the basis of a wide range of customer data. Migration to the service factory of the future requires transformative change in five critical dimensions: customer experience, service delivery, digital technology, people and organization, and digital ecosystems. ... The service factory of the future will enable providers to be predictive, preventive, and proactive. It will anticipate customers’ needs and approach them with solutions and hyperpersonalized experiences. More important, it will develop capabilities to prevent service lapses from occurring in the first place.


FBI: BEC Scams Are Using Email Auto-Forwarding

The first was detected in August when fraudsters used the email forwarding feature in the compromised accounts of a U.S.-based medical company. The attackers then posed as an international vendor and tricked the victim to make a fraudulent payment of $175,000, according to the alert. Because the targeted organization did not sync its webmail with its desktop application, it was not able to detect the malicious activity, the FBI notes. In a second case in August, the FBI found fraudsters created three forwarding rules within a compromised email account. "The first rule auto-forwarded any email with the search terms 'bank,' 'payment,' 'invoice,' 'wire,' or 'check' to cybercriminals' email accounts," the alert notes. "The other two rules were based on the sender's domain and again forwarded to the same email addresses." Chris Morales, head of security analytics at security firm Vectra AI, says that in addition to reaping fraudulent payments, fraudsters can use email-forwarding to plant malware or malicious links in documents to circumvent prevention controls or to steal data and hold it for ransom. In in a keynote presentation at Group-IB's CyberCrimeCon 2020 virtual conference in November, Craig Jones, director of cybercrime at Interpol, noted that BEC scammers are among the threat actors that are retooling their attacks to take advantage of the COVID-19 pandemic.
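
A defensive takeaway is to periodically audit forwarding rules for the pattern the FBI describes. The sketch below is generic: the rule structure is hypothetical, and in practice the rules would be exported from your mail platform's admin API.

```python
# Generic sketch: flag mailbox auto-forwarding rules that resemble the ones in
# the FBI alert. The rule structure here is hypothetical; real rules would come
# from an export of each user's forwarding configuration.
SUSPICIOUS_TERMS = {"bank", "payment", "invoice", "wire", "check"}

def flag_forwarding_rules(rules, trusted_domains):
    """rules: list of dicts with 'mailbox', 'keywords', 'forward_to'."""
    findings = []
    for rule in rules:
        external = [addr for addr in rule["forward_to"]
                    if addr.split("@")[-1].lower() not in trusted_domains]
        keyword_hits = SUSPICIOUS_TERMS & {k.lower() for k in rule.get("keywords", [])}
        if external and keyword_hits:
            findings.append((rule["mailbox"], sorted(keyword_hits), external))
    return findings

# Example usage with made-up data:
rules = [{"mailbox": "ap@victim.example",
          "keywords": ["invoice", "wire"],
          "forward_to": ["drop@attacker.example"]}]
print(flag_forwarding_rules(rules, trusted_domains={"victim.example"}))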


Robots Can Now Have Tunable Flexibility & Improved Performance

Generally, mechanisms for accommodating stiffness variation are bulky relative to their nominal footprint, while curved origami can compactly support a wide stiffness range with on-demand flexibility. The structures described in Jiang and team’s research combine the folding energy at the origami creases with the bending of the panel, tuned by switching among multiple curved creases between two points. This lets a single robot achieve a variety of movements: a pneumatic swimming robot created by the team can perform nine distinct motions, including fast, medium, slow, straight and rotational movements, simply by changing which creases are used.


Migrating a Monolith towards Microservices with the Strangler Fig Pattern

One of the few benefits of the Zope framework is that the fragile nature of the software has forced us to work in small increments, and ship in frequent small releases. Having unreleased code lying around for more than a few hours has led to incidents around deployment, like accidental releases or code being overwritten. So the philosophy has been "write it and ship it immediately". Things like feature toggles and atomic releases were second nature. Therefore, when we designed the wrapper and the new service architectures, feature toggles were baked in from the start (if a little crude in the first cuts). As a result, from the early days of the project, code was being pushed to live within hours of being committed. Moving to a framework like Flask enabled "proper" CI pipelines, which can perform actual checks on the code. Whilst a deployment into production is manually initiated, all other environment builds and deployments are initiated by a commit into a branch. The aim is to keep the release cadence the same as it has been with Zope. Changes are small, with multiple small deployments a day rather than massive "releases". We then use feature toggles to enable functionality in production.
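
A minimal feature-toggle sketch of the kind described above is shown below. The flag names, the environment-variable source, and the stubbed legacy/new functions are illustrative; real setups often load toggles from a config service so they can be flipped without a deploy.

```python
# Minimal feature-toggle sketch. Flag names and the environment-variable source
# are illustrative; the two payment functions are stubs standing in for the
# legacy path and the new strangler-fig service.
import os

class FeatureToggles:
    def __init__(self, source=os.environ):
        self._source = source

    def is_enabled(self, flag: str) -> bool:
        return self._source.get(f"FEATURE_{flag.upper()}", "off") == "on"

def new_payment_service(order):
    return f"paid {order} via new service"

def legacy_zope_checkout(order):
    return f"paid {order} via legacy Zope path"

toggles = FeatureToggles()

def checkout(order):
    if toggles.is_enabled("new_payment_service"):
        return new_payment_service(order)   # strangler fig: route to the new service
    return legacy_zope_checkout(order)      # fall back to the legacy path

print(checkout("order-123"))
```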


Misconfigured Docker Servers Under Attack by Xanthe Malware

“Once all possible keys have been found, the script proceeds with finding known hosts, TCP ports and usernames used to connect to those hosts,” said researchers. “Finally, a loop is entered which iterates over the combination of all known usernames, hosts, keys and ports in an attempt to connect, authenticate on the remote host and launch the command lines to download and execute the main module on the remote system.” Misconfigured Docker servers are another way that Xanthe spreads. Researchers said that Docker installations can be easily misconfigured and the Docker daemon exposed to external networks with a minimal level of security. Various past campaigns have been spotted taking advantage of such misconfigured Docker installations; for instance, in September, the TeamTNT cybercrime gang was spotted attacking Docker and Kubernetes cloud instances by abusing a legitimate cloud-monitoring tool called Weave Scope. In April, an organized, self-propagating cryptomining campaign was found targeting misconfigured open Docker Daemon API ports; and in October 2019, more than 2,000 unsecured Docker Engine (Community Edition) hosts were found to be infected by a cryptojacking worm dubbed Graboid.
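
On the defensive side, the exposure Xanthe abuses is easy to check for. The sketch below probes whether a host answers the Docker Engine API over plain HTTP on the default unauthenticated port; the host list is illustrative.

```python
# Defensive sketch: check whether a host exposes the Docker Engine API without
# TLS or authentication (the misconfiguration abused by Xanthe and similar
# campaigns). The host list is illustrative.
import requests

def docker_api_exposed(host: str, port: int = 2375, timeout: float = 3.0) -> bool:
    try:
        resp = requests.get(f"http://{host}:{port}/version", timeout=timeout)
        return resp.ok and "ApiVersion" in resp.json()
    except (requests.RequestException, ValueError):
        return False

for host in ["10.0.0.5", "10.0.0.6"]:   # replace with your own inventory
    if docker_api_exposed(host):
        print(f"WARNING: unauthenticated Docker daemon reachable on {host}:2375")
```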


Finding rogue devices in your network using Nmap

Just knowing what ports are open is not enough, as many times, these services may be listening on non-standard ports. You will also want to know what software and version are behind the port from a security perspective. Thanks to Nmap's Service and Version Detection capabilities, it is possible to perform a complete network inventory and host and device discovery, checking every single port per device or host and determining what software is behind each. Nmap connects to and interrogates each open port, using detection probes that the software may understand. By doing this, Nmap can provide a detailed assessment of what is out there rather than just meaningless open ports. ... Rogue DHCP servers are just like regular DHCP servers, but they are not managed by the IT or network staff. These rogue servers usually appear when users knowingly or unknowingly connect a router to the network. Another possibility is a compromised IoT device such as mobile phones, printers, cameras, tablets, smartwatches, or something worse, such as a compromised IT application or resource. Rogue DHCP servers are frustrating, especially if you are trying to deploy a fleet of servers using PXE, as PXE depends heavily on DHCP. 
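
Here is a short sketch of the service/version detection workflow using the python-nmap wrapper (it shells out to nmap, which must be installed). The subnet is illustrative, and the exact result keys may vary slightly between wrapper versions.

```python
# Sketch using the python-nmap wrapper (requires nmap itself to be installed).
# The subnet is illustrative; -sV enables Service and Version Detection.
import nmap

scanner = nmap.PortScanner()
scanner.scan(hosts="192.168.1.0/24", arguments="-sV")

for host in scanner.all_hosts():
    for proto in scanner[host].all_protocols():
        for port, info in sorted(scanner[host][proto].items()):
            if info.get("state") == "open":
                # Build a readable "name product version" string for the inventory.
                service = " ".join(filter(None, [info.get("name"),
                                                 info.get("product"),
                                                 info.get("version")]))
                print(f"{host} {proto}/{port}: {service or 'unknown service'}")
```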


Digital transformation, innovation and growth is accelerated by automation

Automation is a key digital transformation trend for 2021 and beyond. Here are some key findings regarding the importance of process automation. According to Salesforce, 81% of IT organizations will automate more tasks to allow team members to focus on innovation over the next 12-18 months. McKinsey notes that 57% of organizations say they are at least piloting automation of processes in one or more business units or functions. And 31% of IT decision makers say that automation is a key business initiative tied to digital transformation, per MuleSoft. Integration continues to be a challenge for process automation. Sixty percent of line of business users agree that an inability to connect systems, applications, and data hinders automation initiatives. The future of automation is declarative programming. "In 2021, we'll see more and more systems be intent-based, and see a new programming model take hold: a declarative one. In this model, we declare an intent - a desired goal or end state - and the software systems connected via APIs in an application network autonomously figure out how to simply make it so," said Uri Sarid, CTO, MuleSoft. McKinsey estimates that automation could raise productivity in the global economy by up to 1.4% annually. 


Why microlearning is the key to cybersecurity education

Most organizations are used to relatively “static” training. For example: fire safety is fairly simple – everyone knows where the closest exit is and how to escape the building. Worker safety training is also very stagnant: wear a yellow safety vest and a hard hat, make sure to have steel toed shoes on a job site, etc. The core messages for most trainings don’t evolve and change. That’s not the case with cybersecurity education and training: attacks are ever-changing, they differ based on the targeted demographic, current affairs, and the environment we are living in. Cybersecurity education must be closely tied to the value and mission of an organization. It must also be adaptable and evolve with the changing times. Microlearning and gamification are new ways to help encourage and promote consistent cybersecurity learning. This is especially important because of the changing demographics: there are currently more millennials in the workforce than baby boomers, but the training methods have not altered dramatically in the last 30 years. Today’s employee is younger, more tech-savvy and socially connected. Modern training needs to acknowledge and utilize that.


Cut IT Waste Before IT Jobs

While it is impossible to fully correlate the impact of ITAM on job retention, we can illustrate the opportunity with some simple sums. Starting with Gartner’s latest Worldwide IT Spending Forecast, the total spend next year on Data Center Systems, Enterprise Software, and Devices (the three areas of IT spend that ITAM can address) will be $1.35 trillion. If ITAM can reduce this spending by just 5% (which we have already said is a very conservative estimate for the industry), that equates to roughly $67.5 billion of potential savings from ITAM alone. If just some of these savings were applied toward talent retention, they could protect hundreds of thousands of jobs around the world. Before IT departments slash critical projects or lay off staff, we urge them to look at their IT spend first to see where savings could be made. Remember that cutting IT jobs doesn’t just cut costs; it means the removal of talent, careers and institutional knowledge -- in comparison to IT waste, which is removing unused or unwanted resources with no impact whatsoever on delivery of services. What’s more, with many IT purchases having been rushed through during the March/April period to support home working, there is a high likelihood of “bloatware” across organizations that could yield higher savings than you would typically expect in an ITAM project.


Covid-19 vaccine supply chain attacked by unknown nation state

The X-Force team said its analysis pointed to a “calculated operation” starting in September, spanning six countries and targeting organisations associated with international vaccine alliance Gavi’s Cold Chain Equipment Optimisation Platform (CCEOP). It was unable to precisely attribute the campaign, but said that the precision targeting of key executives at relevant organisations bore the “potential hallmarks of nation-state tradecraft”. IBM senior strategic cyber threat analyst Claire Zaboeva wrote: “While attribution is currently unknown, the precision targeting and nature of the specific targeted organisations potentially point to nation-state activity. “Without a clear path to a cash-out, cyber criminals are unlikely to devote the time and resources required to execute such a calculated operation with so many interlinked and globally distributed targets. Likewise, insight into the transport of a vaccine may present a hot black-market commodity. ...” According to IBM X-Force, the attacker has been impersonating an executive at Haier Biomedical, a cold chain specialist, to target organisations including the European Commission’s Directorate General for Taxation and Customs Union, and companies in the energy, manufacturing, website creation and software and internet security sectors.



Quote for the day:

"Every great leader can take you back to a defining moment when they decided to lead." -- John Paul Warren

Daily Tech Digest - December 02, 2020

Establish AI Governance, Not Best Intentions, to Keep Companies Honest

Transparency is necessary to adapt analytic models to rapidly changing environments without introducing bias. The pandemic’s seesawing epidemiologic and economic conditions are a textbook example. Without an auditable, immutable system of record, companies have to either guess or pray that their AI models still perform accurately.  This is of critical importance as, say, credit card holders request credit limit increases to weather unemployment. Lenders want to extend as much additional credit as prudently possible, but to do so, they must feel secure that the models assisting such decisions can still be trusted. Instead of ferreting through emails and directories or hunting down the data scientist who built the model, the bank’s existing staff can quickly consult an immutable system of record that documents all model tests, development decisions and outcomes. They can see what the credit origination model is sensitive to, determine if features are now becoming biased in the COVID environment, and build mitigation strategies based on the model’s audit investigation. Responsibility is a heavy mantle to bear, but our societal climate underscores the need for companies to use AI technology with deep sensitivity to its impact. 


The three stages of security risk reprioritization

As organizations currently undergo planning and budget allocation for 2021, they are looking to invest in more permanent solutions. IT teams are trying to understand how they can best invest in solutions that will ensure a strong security posture. There’s also a growing recognition of the need for complete visibility into the endpoint, even as devices operate on remote networks. Policies are being created around how much work should actually be done over a VPN, and organizations are by default creating more forward-looking, permanent policies and technology solutions. But as security teams embrace new tools for security and operations to enable continuity efforts, this also generates new attack vectors. COVID-19 has presented the opportunity for the IT community to evaluate what can and can’t be trusted, even when operating under Zero Trust architectures. For example, some of the technologies, like VPN, can undermine what they were designed for. At the beginning of the pandemic, CISA issued a warning around the continued exploitation of specific VPN vulnerabilities.


Updates To The Open FAIR Body Of Knowledge Part 2

The Open FAIR BoK Update Project Working Group made a deliberate effort to more logically present information in O-RA. In Section 4: Risk Measurement: Modeling and Estimate, the ideas of accuracy and precision are now presented before the concepts of subjectivity and objectivity, and the section ends with the concepts of estimates and calibration. O-RA now also emphasizes having usefully precise estimates; in other words, an estimate is usefully precise if more precision would not improve or change the decision being made with the information. The concept of “Confidence Level in the Most Likely Value” as a parameter to model estimates has been removed from O-RA in bringing it to Version 2.0. Instead, this concept has been replaced by the choice of distribution that best represents what the Open FAIR risk analyst knows about the risk factor being modelled; however, Open FAIR is agnostic on the distribution type used. O-RA Version 2.0 also takes inspiration from the Open FAIR™ Risk Analysis Process Guide to better define how to do an Open FAIR risk analysis in Section 5: Risk Analysis Process and Methodology. To do this, O-RA specifies that a risk analyst must first scope the analysis by identifying a Loss Scenario (Stage 1). The Loss Scenario is the story of loss that forms a sentence from the perspective of the Primary Stakeholder.
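
To illustrate how calibrated estimates and a chosen distribution combine into a risk result, here is a Monte Carlo sketch of a FAIR-style calculation. The triangular distributions and all parameter values are made-up examples for one hypothetical Loss Scenario; they are not part of the Open FAIR standard.

```python
# Illustrative Monte Carlo sketch of a FAIR-style analysis: combine a calibrated
# estimate of Loss Event Frequency with one of Loss Magnitude to get an
# annualized loss distribution. Distributions and parameters are made up.
import numpy as np

rng = np.random.default_rng(7)
N = 100_000  # simulation runs

# Calibrated min / most-likely / max estimates for one Loss Scenario.
loss_event_frequency = rng.triangular(left=0.5, mode=2.0, right=6.0, size=N)      # events/year
loss_magnitude = rng.triangular(left=20_000, mode=80_000, right=400_000, size=N)  # $/event

annualized_loss = loss_event_frequency * loss_magnitude

p10, p50, p90 = np.percentile(annualized_loss, [10, 50, 90])
print(f"Annualized loss estimate: P10=${p10:,.0f}  P50=${p50:,.0f}  P90=${p90:,.0f}")
```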


'Return to Office' Phishing Emails Aim to Steal Credentials

In the phishing campaign uncovered by Abnormal Security, the emails are disguised as an automated internal notification from the company as indicated by the sender's display name. "But the sender's actual address is 'news@newsletterverwaltung.de,' an otherwise unknown party," the research report states. "Further, the IP originates from a blacklisted VPN service that is not consistent with the corporate IP, which indicates the sender is impersonating the automated internal system." The emails, sent to specific employees, contain an HTML attachment that bears the recipient's name, which lures employees into opening it. The email also contains text that makes it seem as if the recipient has received a voicemail, researchers state. By clicking on the attachment, the user is redirected to a SharePoint document with new instructions on the company's remote working policy. "Underneath the new policy, there is text that states 'Proceed with acknowledgement here.' Clicking on this link redirects the user to the attack landing page, which is a form to enter the employee's email credentials," researchers note. Once a recipient falls victim to this trap, the login credentials for their email account are harvested.


CIO interview: John Davidson, First Central Group

“Intelligent automation means so much more for us than an efficiency tool,” says Davidson. “We are building an entirely new technical competency into our business, so that it becomes part of our DNA. This not only changes operational execution but, importantly, changes the management mindset about the art of the possible and strategic decision-making.” The automated renewal process is another area where Blue Prism has been deployed. With the support of Blue Prism’s partner, IT and automation consultancy T-Tech, the First Central team can check the issue of more than 3,000 renewal invitations daily for accuracy in just two hours. The new process verifies each renewal notice, removing the need for costly, time-intensive manual work downstream to correct anomalies and reduce the risk of a regulatory incident. Along with driving operational efficiencies, Davidson believes RPA also boosts business confidence. “Risk mitigation is a lot more intangible, but we can measure the cost of distraction and our effectiveness from a robotics perspective,” he says. Davidson’s team has established a robotics capability for the business. “It is not my job to close down operational risk,” he says. “That’s the responsibility of the process owner. My team has to deliver technology that closes down the risk.”


Q&A on the Book The Power of Virtual Distance

Virtual work gives us many options as to where, when and how to work. And this is highly useful and a positive development. However, as we discovered from the beginning, the trade-offs and unintended consequences are extensive and need to be corrected. When we work mainly through screens, the human contextual markers that guide our cognitive and emotional selves, to know who we can trust and under what circumstances, disappear behind virtual curtains. We have shown conclusively that high Virtual Distance is the statistical equivalent of Distrust, while lower Virtual Distance results in the strong trust bonds we need to build relationship foundations that ultimately result in both better work product and higher levels of well-being. Recently a senior executive from a large global company expressed his concern regarding the fact that many leaders do not trust their employees to work virtually. And we’ve found that it’s a two-way street, as many employees don’t trust their leaders to assess or treat them fairly under these conditions. The erosion of trust was highly problematic before Covid-19. Now, it’s risen to the level of a “crisis of distrust”.


Why I'd Take Good IT Hygiene Over Security's Latest Silver Bullet

The most common way to perform lateral movement is to reuse privileges in the assets that attackers have a foothold on, such as secrets and credentials stored on breached machines. Vendors will preach that they can distinguish between legitimate traffic and lateral movements — to even automatically block such illicit activity. They'll use terms like machine learning and AI to make their product sound advanced, but these capabilities are very limited. The product may block well-known malware that performs the exact same sequence in any invocation and hence was "signed" by them — making such products glorified, network-based, signature-matching systems. But because AI and machine learning are based on training, they aren't able to distinguish between legitimate traffic and lateral movement with an accuracy that fully supports runtime prevention. Moreover, no one knows how these applications work in all scenarios. Are you willing to block traffic just because it hasn't been seen before? Or what about an edge case in the app it's never seen? On the other hand, managing lateral movement risk is definitely possible. This can be done by analyzing the secrets and privileges stored and associated with any given asset and determining if they're overly permissive. 
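
As a concrete, if narrow, example of the "analyze stored secrets" idea, the sketch below inventories conventional SSH material on a host and lists where it could plausibly reach. The paths are the usual defaults; a real assessment would also cover cloud credentials, tokens, and application config files.

```python
# Small sketch: inventory SSH secrets on a host to reason about where an
# attacker with a foothold could move laterally. Paths are the conventional
# defaults; this is a starting point, not a full privilege analysis.
from pathlib import Path

def inventory_ssh_material(home: Path = Path.home()) -> dict:
    ssh_dir = home / ".ssh"
    inventory = {"private_keys": [], "known_hosts": [], "configured_hosts": []}
    if not ssh_dir.is_dir():
        return inventory
    for path in ssh_dir.iterdir():
        if path.is_file() and "PRIVATE KEY" in path.read_text(errors="ignore"):
            inventory["private_keys"].append(path.name)
    known = ssh_dir / "known_hosts"
    if known.is_file():
        inventory["known_hosts"] = [line.split()[0]
                                    for line in known.read_text().splitlines() if line.strip()]
    cfg = ssh_dir / "config"
    if cfg.is_file():
        for line in cfg.read_text().splitlines():
            parts = line.split()
            if len(parts) >= 2 and parts[0].lower() == "host":
                inventory["configured_hosts"].extend(parts[1:])
    return inventory

print(inventory_ssh_material())
```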


Automation Justification

The human touch is also recommended in code reviews — yes, please use the code grammar checkers and test coverage tools, but getting your code reviewed and reviewing others’ code benefits everyone involved. Sometimes folks worry about the cost of tools and labor to get the process started. Lastly, when starting a larger automation project, do not try to do everything at once. Prioritizing and easing into the automation process makes it simpler and increases the probability it can be done with no loss of functionality. In terms of naysayers, some of the reasons given by humans are “if it ain’t broke, don’t fix it,” some don’t feel comfortable if they are not in control, sometimes the person does not understand the tools needed, and some folks feel like a computer will replace their job. So what do we do? Show them the metrics that demonstrate improvements, teach them how to use the tools, or just let them know that now that their time is freed up, they can do more meaningful, fun, cool stuff with it. Alluding back to an earlier slide, here are some metrics that will show your team, your management, and the bean counters some improvement: cost and time savings; test coverage and speedup; customer satisfaction; fewer defects; faster time to release, as well as to recover from issues; and reduced risk.


The vicious cycle of circular dependencies in microservices

In software engineering, modularity refers to the degree to which an application can be divided into independent, interchangeable modules that work together to form a single functioning item that can serve a specific business function. Modularity promotes reusability, better maintainability and manageability, low coupling, and high cohesion. Despite the benefits it offers, modular design is still plagued by dependency problems. In a typical microservices architecture, you'll often encounter dependencies among the services and components. Although these services are modeled as isolated, independent units, they still need to communicate for the purpose of data and information exchange. Ideally, a microservices application shouldn't contain circular dependencies. This means that one service should not call another one directly. Instead, those services should operate on event-based triggers. However, reality dictates that most developers will still need to closely link certain parts of an application, and problematic dependencies will persist. A circular dependency is defined as a relationship between two or more application modules that are codependent.
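
One practical way to surface circular dependencies is to model the service call graph and look for cycles. The sketch below does this with a plain depth-first search; the example graph is made up.

```python
# Sketch: detect circular dependencies in a service call graph with a simple
# depth-first search. The example services and edges are made up.
def find_cycle(graph):
    WHITE, GREY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}
    stack = []

    def visit(node):
        color[node] = GREY
        stack.append(node)
        for dep in graph.get(node, []):
            if color.get(dep, WHITE) == GREY:           # back edge -> cycle found
                return stack[stack.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE:
                cycle = visit(dep)
                if cycle:
                    return cycle
        color[node] = BLACK
        stack.pop()
        return None

    for node in graph:
        if color[node] == WHITE:
            cycle = visit(node)
            if cycle:
                return cycle
    return None

services = {"orders": ["payments"], "payments": ["fraud"], "fraud": ["orders"], "catalog": []}
print(find_cycle(services))  # ['orders', 'payments', 'fraud', 'orders']
```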


What is cyber insurance? Everything you need to know about what it covers and how it works

Different policy providers might offer coverage of different things, but generally cyber insurance coverage will likely cover the immediate costs associated with falling victim to a cyberattack. "Cyber insurance policies are designed to cover the costs of security failures, including data recovery, system forensics, as well as the costs of legal defence and making reparations to customers," says Mark Bagley, VP at cybersecurity company AttackIQ. Underwriting data recovery and system forensics, for example, would help cover some of the cost of investigating and remediating a cyberattack by employing forensic cybersecurity professionals to aid in finding out what happened – and fixing the issue. This is the sort of standard procedure that follows in the aftermath of a ransomware attack, one of the most damaging and disrupting kinds of incident an organisation can face right now. It is also the case that some cyber insurance companies cover the cost of actually giving in and paying a ransom – even though that's something that law enforcement and the information security industry doesn't recommend, as it just encourages cyber criminals to commit more attacks.



Quote for the day:

"Leadership is not a position. It is a combination of something you are (character) and some things you do (competence)." -- Ken Melrose

Daily Tech Digest - December 01, 2020

Beginner's Guide to Quantum Machine Learning

Whenever you think of the word "quantum," it might trigger the idea of an atom or molecule. Quantum computers are made up of a similar idea. In a classical computer, processing occurs at the bit-level. In the case of Quantum Computers, there is a particular behavior that governs the system; namely, quantum physics. Within quantum physics, we have a variety of tools that are used to describe the interaction between different atoms. In the case of Quantum Computers, these atoms are called "qubits" (we will discuss that in detail later). A qubit acts as both a particle and a wave. A wave distribution stores a lot of data, as compared to a particle (or bit). Loss functions are used to keep a check on how accurate a machine learning solution is. While training a machine learning model and getting its predictions, we often observe that all the predictions are not correct. The loss function is represented by some mathematical expression, the result of which shows by how much the algorithm has missed the target. A Quantum Computer also aims to reduce the loss function. It has a property called Quantum Tunneling which searches through the entire loss function space and finds the value where the loss is lowest, and hence, where the algorithm will perform the best and at a very fast rate.
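
The loss-minimization point can be made concrete with a purely classical illustration. The sketch below runs plain gradient descent on a one-dimensional loss surface with several minima; different starting points land in different local minima, which is the kind of global search problem the article says quantum tunneling-based approaches target. No quantum code is involved, and the loss function is invented for the example.

```python
# Classical illustration (no quantum code): gradient descent on a loss surface
# with several minima can get stuck in a local one. The loss function and
# starting points are made up for the example.
import numpy as np

def loss(x):
    return np.sin(3 * x) + 0.3 * x**2        # several local minima, one global

def grad(x, eps=1e-6):
    # Numerical derivative, to keep the sketch self-contained.
    return (loss(x + eps) - loss(x - eps)) / (2 * eps)

def gradient_descent(x0, lr=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

for start in (-2.0, 0.0, 2.0):
    x = gradient_descent(start)
    print(f"start {start:+.1f} -> x={x:+.3f}, loss={loss(x):.3f}")
# Different starting points settle into different minima; only one is global.
```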


How to Develop Microservices in Kubernetes

Iterating from local development to Docker Compose to Kubernetes has allowed us to efficiently move our development environment forward to match our needs over time. Each incremental step forward has delivered significant improvements in development cycle time and reductions in developer frustration. As you refine your development process around microservices, think about ways you can build on the great tools and techniques you have already created. Give yourself some time to experiment with a couple of approaches. Don’t worry if you can’t find one general-purpose one-size-fits-all system that is perfect for your shop. Maybe you can leverage your existing sets of manifest files or Helm charts. Perhaps you can make use of your continuous deployment infrastructure such as Spinnaker or ArgoCD to help produce developer environments. If you have time and resources, you could use Kubernetes libraries for your favorite programming language to build a developer CLI to manage their own environments. Building your development environment for sprawling microservices will be an ongoing effort. However you approach it, you will find that the time you invest in continuously improving your processes pays off in developer focus and productivity.
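
As a tiny taste of the "build a developer CLI with Kubernetes libraries" suggestion, here is a sketch that creates a per-developer namespace which later tooling could populate with service manifests. It uses the official `kubernetes` Python client and assumes a working kubeconfig; the namespace naming and labels are illustrative.

```python
# Sketch of a developer-CLI building block: create a per-developer namespace.
# Uses the official `kubernetes` Python client; names and labels are illustrative.
import sys
from kubernetes import client, config

def create_dev_namespace(developer: str) -> None:
    config.load_kube_config()
    api = client.CoreV1Api()
    ns = client.V1Namespace(
        metadata=client.V1ObjectMeta(
            name=f"dev-{developer}",
            labels={"purpose": "dev-environment", "owner": developer},
        )
    )
    api.create_namespace(body=ns)
    print(f"created namespace dev-{developer}")

if __name__ == "__main__":
    create_dev_namespace(sys.argv[1] if len(sys.argv) > 1 else "alice")
```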


Enabling the Digital Transformation of Banks with APIs and an Enterprise Architecture

One is the internal and system APIs. Core banking systems are monolith architectures. They are still based on mainframes and COBOL [programming language]. They are legacy technologies and not necessarily coming out of the box with open APIs. Having internal and system APIs helps to speed up the development of new microservices-based on these legacy systems or services that use legacies as back-ends. The second category of APIs is public APIs. These are APIs that connect a bank’s back-end systems and services. They are a service layer, which is necessary for external services. For example, they might be used to obtain a credit rating or address validation. You don’t want to do these validations for yourself when the validity of a customer record is checked. Take the confirmation of postal codes in the U.S. In the process of creating a customer record, you use an API from your own system to link to an external express address validation system. That system will let you know if the postal code is valid or not. You don’t need to have their own internal resources to do that. And the same applies, obviously, to credit rating, which is information that you can’t have as a bank. The third type of API, and probably the most interesting one, is the public APIs that are more on the service and front-end layers.


Can't Afford a Full-time CISO? Try the Virtual Version

For a fraction of the salary of a full-time CISO, companies can hire a vCISO, which is an outsourced security practitioner with executive-level experience, who, acting as a consultant, offers their time and insight to an organization on an ongoing (typically part-time) basis with the same skillset and expertise of a conventional CISO. Hiring a vCISO on a part-time (or short-term) basis allows a company the flexibility to outsource impending IT projects as needed. A vCISO will work closely with senior management to establish a well-communicated information security strategy and roadmap, one that meets the requirements of the organization and its customers, but also state and federal requirements. Most importantly, a vCISO can provide companies unbiased strategic and operational leadership on security policies, guidelines, controls, and standards, as well as regulatory compliance, risk management, vendor risk management, and more. Since vCISOs are already experts, it saves the organization time and money by decreasing ramp-up time. Businesses are able to eliminate the cost of benefits and full-time employee onboarding requirements.


Why the insurance industry is ready for a data revolution

As it stands today, when a customer chooses a traditional motor insurance policy and is provided with a quote, the price they are given will be based on broad generalisations made about their personal background as an approximate proxy for risk. This might include their age, their gender, their nationality, and there have even been examples of people being charged hundreds of pounds more for policies because of their name. If this kind of profiling took place in other financial sectors, there would be outcry, so why is insurance still operating with such an outdated model? Well, up until now, there has been little innovation in the insurance sector and as a result, little alternative in the way that policies can be costed. But now, thanks to modern telematics, the industry finally has the ability to provide customers with an accurate and fair policy, based on their true risk on the road: how they really drive. Telematics works by monitoring and gathering vehicle location and activity data via GPS and today we can track speed, the number of hours spent in the vehicle, the times of the day that customers are driving, and even the routes they take. We also have the technology available to consume and process swathes of this data in real time.
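
To make the telematics idea tangible, here is a small sketch that derives simple per-driver risk features from GPS records. The record format (timestamp, speed, road limit) and the chosen features are hypothetical; real telematics feeds and pricing models are far richer.

```python
# Sketch: derive simple risk features from telematics records. The record format
# is hypothetical; real feeds include acceleration, braking, routes, and more.
from datetime import datetime

def risk_features(records):
    """records: list of dicts with 'timestamp' (ISO 8601), 'speed_kph', 'road_limit_kph'."""
    total = len(records)
    if total == 0:
        return {}
    speeding = sum(1 for r in records if r["speed_kph"] > r["road_limit_kph"])
    night = sum(1 for r in records
                if datetime.fromisoformat(r["timestamp"]).hour in (23, 0, 1, 2, 3, 4))
    return {
        "pct_time_speeding": round(100 * speeding / total, 1),
        "pct_night_driving": round(100 * night / total, 1),
        "max_speed_kph": max(r["speed_kph"] for r in records),
    }

sample = [
    {"timestamp": "2020-12-01T23:30:00", "speed_kph": 92, "road_limit_kph": 80},
    {"timestamp": "2020-12-02T08:10:00", "speed_kph": 45, "road_limit_kph": 50},
]
print(risk_features(sample))
```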


Foiling RaaS attacks via active threat hunting

One of the tactics that really stands out, and they’re not the only attackers to do it, but they are one of the first to do it, is actually making a copy and stealing the victim’s data prior to the ransomware payload execution. The benefit that the attacker gets from this is they can now leverage this for additional income. What they do is they threaten the victim to post sensitive information or customer data publicly. And this is just another element of a way to further extort the victim and to increase the amount of money that they can ask for. And now you have these victims that have to worry about not only having all their data taken from them, but actual public exposure. It’s becoming a really big problem, but those sorts of tactics – as well as using social media to taunt the victim and hosting their own infrastructure to store and post data – all of those things are elements that prior to seeing it used with Ransomware-as-a Service, were not widely seen in traditional enterprise ransomware attacks. ... You can’t trust that paying them is going to keep you protected. Organizations are in a bad spot when this happens, and they’ll have to make those decisions on whether it’s worth paying.


Sizing Up Synthetic DNA Hacking Risks

Rami Puzis, head of the Ben-Gurion University Complex Networks Analysis Lab and a co-author of the study, tells ISMG that the researchers decided to examine potential cybersecurity issues involving the synthetic bioengineering supply chain for a number of reasons. "As with any new technology, the digital tools supporting synthetic biology are developed with effectiveness and ease of use as the primary considerations," he says. "Cybersecurity considerations usually come in much later when the technology is mature and is already being exploited by adversaries. We knew that there must be security gaps in the synthetic biology pipeline. They just need to be identified and closed." The attack scenario described by the study underscores the need to harden the synthetic DNA supply chain with protections against cyber biological threats, Puzis says. "To address these threats, we propose an improved screening algorithm that takes into account in vivo gene editing. We hope this paper sets the stage for robust, adversary resilient DNA sequence screening and cybersecurity-hardened synthetic gene production services when biosecurity screening will be enforced by local regulations worldwide."


Securing the Office of the Future

The vast majority of the things that we see every day are things that you never read about or hear about. It’s the proverbial iceberg diagram. That being said, in this interesting and very unique time that we are in, there is a commonality—and Sean’s actually already mentioned it once today—there are two major attack patterns that we’re seeing over and over, and these are not new things, they’re just very opportunistically preyed upon right now because of COVID and because of the remote work environment, but that’s ransomware and kind of spear phishing or regular old phishing attacks. Because people are at a distance and expected to be working virtually today and threat actors know that, so they’re getting better and better at laying booby traps, if you will, and e-mail to get people to click on attachments and other sorts of links. ... Coincidentally, or perhaps not coincidentally, one of the characters in our comic is called Phoebe the Phisher, and we were very deliberate about creating that character. She has a harpoon, of course, which is for, you know, whale phishing. She has a spear for targeted spear phishing, and she also has a, you know, phishing rod for kind of regular, you know, spray and pray kind of phishing.


How to maximize traffic visibility with virtual firewalls

The biggest advantage of a virtual firewall, however, is its support for the obvious dissolution of the enterprise perimeter. Even if an active edge DMZ is maintained through load balanced operation, every enterprise is experiencing the zero trust-based extension of their operation to more remote, virtual operation. Introducing support for virtual firewalls, even in traditional architectures, is thus an excellent forward-looking initiative. An additional consideration is that cloud-based functionality requires policy management for hosted workloads – and virtual firewalls are well-suited to such operation. Operating in a public, private, or hybrid virtual data center, virtual firewalls can protect traffic to and from hosted applications. This can include connections from the Internet, or from tenants located within the same data center enclave. One of the most important functions of any firewall – whether physical or virtual – involves the inspection of traffic for evidence of anomalies, breaches, or other policy violations. It is here that virtual firewalls have emerged as offering particularly attractive options for enterprise security teams building out their threat protection.


More than data

First of all, the system has to be told where to find the various clauses in a set of sample contracts. This can be easily done by marking the respective portions of text and labelling them with the names of the clauses they contain. On this basis we can train a classifier model that – when reading through a previously unseen contract – recognises what type of contract clause can be found in a certain text section. With a ‘conventional’ (i.e. not DL-based) algorithm, a small number of examples should be sufficient to generate an accurate classification model that is able to partition the complete contract text into the various clauses it contains. Once a clause is identified within a certain contract of the training data, a human can identify and label the interesting information items contained within. Since the text portion of one single clause is relatively small, only a few examples are required to come up with an extraction model for the items in one particular type of clause. Depending on the linguistic complexity and variability of the formulations used, this model can be generated either using ML, by writing extraction rules that make use of keywords, or – in exceptionally complicated situations – by applying natural language processing algorithms digging deep into the syntactic structure of each sentence.
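
A minimal sketch of the labelled-sections-to-classifier workflow described above, using scikit-learn, is shown below. The training snippets and labels are made up and far too few for real use; they only illustrate the mechanics.

```python
# Minimal clause-classifier sketch with scikit-learn. The training snippets and
# labels are made up and far too small for a real contract-analysis system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sections = [
    "This Agreement shall terminate upon thirty days written notice by either party.",
    "Either party may end this contract with 30 days notice in writing.",
    "Each party shall keep the other party's information strictly confidential.",
    "The receiving party agrees not to disclose any confidential information.",
    "Payment is due within 45 days of the invoice date.",
    "Invoices shall be settled no later than 30 days after receipt.",
]
labels = ["termination", "termination", "confidentiality", "confidentiality", "payment", "payment"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(sections, labels)

unseen = "All confidential material must not be shared with third parties."
print(clf.predict([unseen])[0])   # expected: confidentiality
```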



Quote for the day:

"You have achieved excellence as a leader when people will follow you everywhere if only out of curiosity." -- General Colin Powell

Daily Tech Digest - November 30, 2020

Pairing AI With Human Judgment Is Key To Avoiding 'Mount Stupid'

We are in the midst of what has been called the age of automation — a transformation of our economy, as robots, algorithms, AI and machines become integrated into everything we do. It would be a mistake to assume automation corrects for the Dunning-Kruger effect. Like humans, dumb bots and even smart AI often do not understand the limitations of their own competency. Machines are just as likely to scale Mount Stupid, and it can just as likely lead to disastrous decisions. But there is a fix for that. For humans, the fix is adding some humility to our decision making. For machines, it means creating flexible systems that are designed to make allowances and seamlessly handle outlier events — the unknowns. Having humans integrated into that system allows one to identify those potential automation failures. In automation, this is sometimes referred to as a human-in-the-loop system. Much like an autonomous vehicle, these systems keep improving as they acquire more input. It’s not rigid; if the autonomous vehicle encounters a piece of furniture in the road, a remote driver can step in to navigate around it in real-time while the AI or automation system learns from the actions taken by the remote driver. Human-in-the-loop systems are flexible and can seamlessly handle outlier events.
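
One common way to express the human-in-the-loop pattern in code is confidence-based escalation: the automated model acts only when it is confident, and everything else is routed to a person whose decision can later feed back into training. The threshold, case format, and model interface in the sketch below are hypothetical.

```python
# Sketch of the human-in-the-loop pattern: automate the confident cases, escalate
# the rest, and keep escalated cases for retraining. Threshold and model
# interface are hypothetical.
CONFIDENCE_THRESHOLD = 0.85
review_queue = []   # cases a human should decide; later fed back as training data

def decide(case, model):
    label, confidence = model(case)       # model returns (predicted label, confidence 0..1)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label, "automated"
    review_queue.append(case)             # the outlier / "unknown unknown" goes to a person
    return None, "escalated_to_human"

# Example with a stand-in model:
def toy_model(case):
    return ("approve", 0.6 if case.get("outlier") else 0.97)

print(decide({"amount": 120}, toy_model))                    # ('approve', 'automated')
print(decide({"amount": 120, "outlier": True}, toy_model))   # (None, 'escalated_to_human')
```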


UK government ramps up efforts to regulate tech giants

Digital Secretary Oliver Dowden said: “There is growing consensus in the UK and abroad that the concentration of power among a small number of tech companies is curtailing growth of the sector, reducing innovation and having negative impacts on the people and businesses that rely on them. It’s time to address that and unleash a new age of tech growth.” While the Furman report found that there have been a number of efforts between the tech giants to support interoperability, giving consumers greater freedom and flexibility, these can be hampered by technical challenges and a lack of coordination. The report's authors wrote that, in some cases, lack of interoperability is due to misaligned incentives. “Email standards emerged due to co-operation but phone number portability only came about when it was required by regulators. Private efforts by digital platforms will be similarly hampered by misaligned incentives. Open Banking provides an instructive example of how policy intervention can overcome technical and coordination challenges and misaligned incentives.” In July, when the DMT was set up, law firm Osborne Clarke warned about the disruption to businesses that increased regulations could bring.


Consumption of public cloud is way ahead of the ability to secure it

As the shift to working from home at the start of the year began, the old reliance on the VPN showed itself to be a potential bottleneck to employees being able to do what they are paid for. "I think the new mechanism that we've been sitting on -- everyone's been doing for 20 years around VPN as a way of segmentation -- and then the zero trust access model is relatively new, I think that mechanism is really intriguing because [it] is so extensible to so many different problems in use cases that VPN's didn't solve, and then other use cases that people didn't even consider because there was no mechanism to do it," Jefferson said. Going a step further, Eren thinks VPN usage between client and sites is on life support, but VPNs themselves are not going away. ... According to Jefferson, the new best practice is to push security controls as far out to the edge as possible, which undermines the role of traditional appliances like firewalls to be able to enforce security, and people are having to work out the best place for their controls in the new working environment. "I used to be pretty comfortable. This guy, he had 10,000 lines of code written on my Palo Alto or Cisco and every time we did a firewall refresh every 10 years, we had to worry about the 47,000 ACLs [access control list] on the firewall, and now that gets highly distributed," he said.


IOT & Distributed Ledger Technology Is Solving Digital Economy Challenges

DLTs can play an important role in data provenance, yet they ought to be used in conjunction with technologies such as a hardware root of trust and immutable storage. Distributed ledger technology only maintains a record of the transactions themselves, so if you have poor or fake data, it will simply tell you where that bad data has been. In other words, DLTs alone don’t address software engineering’s garbage-in, garbage-out problem, yet they offer considerable benefits when used in concert with technologies that ensure data integrity. Blockchain technology promises to be the missing link enabling peer-to-peer contractual behavior with no third party to “certify” the IoT transaction. It addresses the challenges of scalability, single points of failure, time stamping, record keeping, security, trust and reliability in a consistent way. Blockchain technology could provide a simple infrastructure for two devices to directly transfer a piece of property, such as money or data, between each other with a secure and reliable time-stamped contractual handshake. To enable message exchanges, IoT devices will use smart contracts, which then model the agreement between the two parties.


Does small data provide sharper insights than big data?

Data imbalance occurs when the number of data points for different classes is uneven. In most machine learning models a degree of imbalance is not a problem, but with small data it becomes consequential. One technique is to change the loss function by adjusting class weights so that under-represented classes count for more, another example of how AI models are not perfect. A very readable explanation of imbalance and its remedies can be found here. Difficulty in optimization is a fundamental problem, since optimization is what machine learning is meant to do. Optimization starts with defining some kind of loss or cost function and ends with minimizing it using one optimization routine or another, usually gradient descent, an iterative algorithm for finding a local minimum of a differentiable function (first-semester calculus, not magic). But if the dataset is weak, the optimization may not converge to a useful model. The most popular remedy is transfer learning. As the name implies, transfer learning is a machine learning method in which a model trained on one task is reused to improve a model for another, related task. A simple explanation of transfer learning can be found here. I wanted to cover #3 first, because #2 is the more compelling discussion about small data.
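As a minimal sketch of the loss-reweighting remedy mentioned above, the snippet below trains a classifier on a small, synthetically imbalanced dataset using scikit-learn; the dataset, the roughly 95:5 imbalance ratio and the model choice are illustrative assumptions rather than anything from the article.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    # Small dataset with roughly a 95:5 class imbalance.
    X, y = make_classification(n_samples=400, weights=[0.95, 0.05], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

    # class_weight="balanced" scales each class's contribution to the loss
    # inversely to its frequency, so the minority class is not drowned out.
    clf = LogisticRegression(class_weight="balanced", max_iter=1000)
    clf.fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))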


84% of global decision makers accelerating digital transformation plans

“New ways of working, initially broadly imposed by the global pandemic, are morphing into lasting models for the future,” said Mickey North Rizza, program vice president for IDC‘s Enterprise Applications and Digital Commerce research practice. “Permanent technology changes, underpinned by improved collaboration, include supporting hybrid work, accelerating cloud use, increasing automation, going contactless, adopting smaller TaskApps, and extending the partnership ecosystem. Enterprise application vendors need to assess their immediate and long-term strategies for delivering collaboration platforms in conjunction with their core software.” “If we’ve learned anything this year, it’s that the business environment can change almost overnight, and as business leaders we have to be able to reimagine our organizations and seize opportunities to secure sustainable competitive advantage,” said Mike Ettling, CEO, Unit4. “Our study shows what is possible with continued investment in innovation and a people-first, flexible enterprise applications strategy. As many countries go back into some form of lockdown, this people-centric focus is crucial if businesses are to survive the challenges of the coming months.”


How Apache Pulsar is Helping Iterable Scale its Customer Engagement Platform

Pulsar’s top layer consists of brokers, which accept messages from producers and send them to consumers but do not store data. A single broker handles each topic partition, but brokers can easily exchange topic ownership because they do not store topic state. This makes it easy to add brokers to increase throughput and to put new brokers to work immediately. It also enables Pulsar to handle broker failures. ... One of the most important functions of Iterable’s platform is to schedule and send marketing emails on behalf of Iterable’s customers. To do this, we publish messages to customer-specific queues, then have another service that handles the final rendering and sending of the message. These queues were the first thing we decided to migrate from RabbitMQ to Pulsar. We chose marketing message sends as our first Pulsar use case for two reasons: first, because sending incorporated some of our more complex RabbitMQ use cases, and second, because it represented a very large portion of our RabbitMQ usage. This was not the lowest-risk use case; however, after extensive performance and scalability testing, we felt it was where Pulsar could add the most value.
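For readers unfamiliar with Pulsar, the sketch below shows the basic publish/consume pattern the paragraph describes, using the pulsar-client Python library; the service URL, topic and subscription names are placeholders, and this is an illustrative sketch rather than Iterable's actual pipeline.

    import pulsar

    # Connect to a Pulsar cluster; brokers serve the topic but do not own its
    # storage, so another broker can take over the partition if one fails.
    client = pulsar.Client("pulsar://localhost:6650")

    # The downstream "sender" service subscribes to a per-customer topic.
    consumer = client.subscribe(
        "persistent://public/default/customer-123-sends",
        subscription_name="email-sender",
    )

    # The scheduling service publishes a (placeholder) marketing-send job.
    producer = client.create_producer("persistent://public/default/customer-123-sends")
    producer.send(b'{"campaign_id": 42, "recipient": "user@example.com"}')

    # The consumer renders and sends the email, then acknowledges the message.
    msg = consumer.receive()
    print("would render and send:", msg.data())
    consumer.acknowledge(msg)

    client.close()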


Algorithmic transparency obligations needed in public sector

The review notes that bias can enter algorithmic decision-making systems in a number of ways. These include historical bias, in which data reflecting previously biased human decision-making or historical social inequalities is used to build the model; data selection bias, in which the data collection methods used mean the data is not representative; and algorithmic design bias, in which the design of the algorithm itself introduces bias. Bias can also enter the algorithmic decision-making process through human error: depending on how humans interpret or use the outputs of an algorithm, there is a risk of bias re-entering the process as they apply their own conscious or unconscious biases to the final decision. “There is also risk that bias can be amplified over time by feedback loops, as models are incrementally retrained on new data generated, either fully or partly, via use of earlier versions of the model in decision-making,” says the review. “For example, if a model predicting crime rates based on historical arrest data is used to prioritise police resources, then arrests in high-risk areas could increase further, reinforcing the imbalance.”
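The feedback loop the review describes can be illustrated with a small, entirely synthetic simulation; the numbers and the prioritisation rule below are invented for demonstration and are not taken from the review.

    import random

    random.seed(0)

    # Two areas with the SAME underlying offence rate; area A simply has more
    # recorded arrests because it was patrolled more heavily in the past.
    true_rate = {"A": 0.10, "B": 0.10}
    arrests = {"A": 60, "B": 40}

    for year in range(5):
        # "Model": rank areas by historical arrests and prioritise the top one.
        high, low = sorted(arrests, key=arrests.get, reverse=True)
        patrols = {high: 800, low: 200}   # resources follow the prediction
        for area, n in patrols.items():
            # More patrols mean more offences are observed and recorded,
            # so the prioritised area pulls further ahead every year.
            arrests[area] += sum(random.random() < true_rate[area] for _ in range(n))
        share_a = arrests["A"] / (arrests["A"] + arrests["B"])
        print(f"after year {year + 1}: share of recorded arrests in area A = {share_a:.2f}")

Even though the two areas are statistically identical, the recorded data drifts toward the prioritised area, which is exactly the reinforcement the review warns about.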


Regulation on data governance

The data governance regulation will ensure access to more data for the EU economy and society and give citizens and companies more control over the data they generate. This will strengthen Europe’s digital sovereignty in the area of data. It will be easier for Europeans to allow the use of data related to them for the benefit of society, while ensuring full protection of their personal data. For example, people with rare or chronic diseases may want to give permission for their data to be used to improve treatments for those diseases. Through personal data spaces, which are novel personal information management tools and services, Europeans will gain more control over their data and decide at a granular level who can access their data and for what purpose. Businesses, both small and large, will benefit from new business opportunities as well as from a reduction in the cost of acquiring, integrating and processing data, from lower barriers to market entry, and from a reduction in time-to-market for novel products and services. ... Member States will need to be technically equipped to ensure that privacy and confidentiality are fully respected.


Why Vulnerable Code Is Shipped Knowingly

Even with a robust application security program, organizations will still deploy vulnerable code! The difference is that they will do so with a thorough and contextual understanding of the risks they're taking rather than allowing developers or engineering managers — who lack security expertise — to make that decision. Application security requires a constant triage of potential risks, involving prioritization decisions that allow development teams to mitigate risk while still meeting key deadlines for delivery. As application security has matured, no single testing technique has helped development teams mitigate all security risk. Teams typically employ multiple tools, often from multiple vendors, at various points in the SDLC. Usage varies, as do the tools that organizations deem most important, but most organizations end up utilizing a set of tools to satisfy their security needs. Lastly, while most organizations provide developers with some level of security training, more than 50% only do so annually or less often. This is simply not frequent or thorough enough to develop secure coding habits. While development managers are often responsible for this training, in many organizations, application security analysts carry the burden of performing remedial training for development teams or individual developers who have a track record of introducing too many security issues.



Quote for the day:

"Most people live with pleasant illusions, but leaders must deal with hard realities." - Orrin Woodward