Daily Tech Digest - December 06, 2020

Ransomware Set for Evolution in Attack Capabilities in 2021

“The Maginot line of cybersecurity transformation failed as the first adopters were the e-crime groups and cybercrime cartels, and we just have to pay attention now as perimeter defenses have failed and continue to fail, and visibility and hardening has become an extreme challenge. Most attacks you see today are attacks from the inside out – digital insiders using trusted ecosystems to leverage ransomware attacks and espionage and crime campaigns.” Looking at ransomware in particular, the trio said they do not see this stopping or slowing down “and we continue to predict that this is going to extend significantly,” Foss said. He claimed ransomware groups have brought more people into their groups and are making sure they are getting trusted people, with nation state adversaries taking part as well. “We see this reaching out to additional operating systems; traditionally this has only impacted Windows primarily, but with MacOS having such a market reach in the professional ecosystem of most organizations, we predict it will be targeted as well,” Foss said. “Linux is one we have started to see more campaigns begin to target, and a lot are looking at defacing webpages in addition to taking over core components of ecosystems that these companies operate.”


Rethinking Robotic Process Automation (RPA)

You can't converse much with anyone these days about automation without talking RPA. It seems the little bots are getting everywhere. It's almost like an alien invasion! But always, the talk seems to be about creating and imposing bots on us. A bot for this and a bot for that, and pretty soon you have dozens of little creatures (think about all the little gremlins in the film of the same name!) all nibbling away at pieces of your work. Helpful they may be, but at what cost? In the UK and USA, as we came out of the 2008 financial crisis, economists were left scratching their heads. They were wrestling with what they call the productivity puzzle. Historically, economic growth has always been closely tied to productivity: if output per worker does not grow, then the economy does not grow. In the UK, productivity was actually lower than before the crisis hit. So if productivity growth is required, it only stands to reason that tools to increase productivity are a useful thing to have. (I know I am oversimplifying, but I think it works for where we are going.) What if RPA, instead of being about Robotic Process Automation, became about Robotic Process Assistants? In this new world, we would each have just one robot on our desktop/laptop/machine, a little like Automator on a Mac.


Quantum Sensors Will Revolutionise The Tech Industry

Measurement devices that exploit quantum properties have been around for a while, such as atomic clocks, laser distance meters, and the magnetic resonance imaging used for medical diagnosis. What is new is that individual quantum systems, like atoms and photons, are increasingly used as measurement probes. The entanglement and manipulation of quantum states are used to improve sensitivity, even beyond the limit set by a conventional formulation of the quantum mechanical uncertainty principle. Many scientists believe that quantum technology will enjoy its first real commercial success in sensing. That's because sensing can exploit the very characteristic that makes building a quantum computer so difficult: the extraordinary sensitivity of quantum states to the environment. Whether they are responding to the gravitational pull of buried objects or picking up magnetic fields from the human brain, quantum sensors can recognize a wide range of tiny signals across the world. Some physicists believe that gravity-measuring quantum sensors, in particular, will become widespread quickly, with a potential market of USD 1 billion a year.


Banking to groceries — Data Protection Authority has multi-sector role, but must be efficient

First, the Data Protection Authority should follow a risk-based approach that is implicitly present in the Bill. For example, in many places, the Bill requires the DPA to consider the risk of harm to consumers while framing regulations. Additionally, the Bill categorises data into personal data, sensitive personal data, and critical personal data to differentiate the varying levels of risks that emanate from the misuse of data. Finally, the Bill creates a differential level of regulation between ordinary firms that use data, significant data fiduciaries, and small entities. These point to the fact that risk-based regulation must be inherent to the DPA’s strategic approach. Within this overall framework, the DPA can prioritise its resources by focusing on processing sensitive and critical personal data, and by overseeing significant data fiduciaries. This will allow the DPA to first build capacity in areas that pose the greatest threat to consumers, rather than expending its limited resources to regulate all sectors of economic activity. The DPA can further sharpen its focus by having a low threshold for exempting small entities. This will allow the DPA to focus its regulatory capacity towards firms that pose a larger risk to consumers by collecting and processing large volumes of data.


Australia’s Global RegTech Hub Poised for Growth

Like most businesses, local RegTechs have experienced disruption during the COVID-19 pandemic. The biggest challenge has been an immediate reduction in revenue. A contributing factor is the slowing of export opportunities, following travel restrictions and the postponement of trade events. Nonetheless, Australian RegTechs remain positive about future growth and continue to seek growth capital to fund product development, talent acquisition and market expansion. The pandemic has accelerated a shift towards remote working and digital interactions, increasing the risk of fraud and financial crime, and focusing organisations on the importance of robust cybersecurity. At the same time, Federal and State Governments are recognising the potential of RegTech to efficiently and effectively solve regulatory and compliance challenges, and to become a signature export for Australia. This, combined with regulatory pressure for all regulated entities across a range of industries to adopt RegTech, will create a strong platform for the sector to excel. ... Collectively, these actions will help Australian RegTechs to scale, creating local jobs, and supporting the export of Australian solutions.


Novel Online Shopping Malware Hides in Social-Media Buttons

The imposter buttons look just like the legitimate social-sharing buttons found on untold numbers of websites, and are unlikely to trigger any concern from website visitors, according to Sansec. Perhaps more interestingly, the malware's operators also took great pains to make the code for the buttons itself look as normal and harmless as possible, to avoid being flagged by security solutions. "While skimmers have added their malicious payload to benign files like images in the past, this is the first time that malicious code has been constructed as a perfectly valid image," according to Sansec's recent posting. "The malicious payload assumes the form of an html <svg> element, using the <path> element as a container for the payload. The payload itself is concealed utilizing syntax that strongly resembles correct use of the <svg> element." To complete the illusion of the image being benign, the malicious payloads are named after legitimate companies. The researchers found at least six major names being used for the payloads to lend legitimacy: facebook_full; google_full; instagram_full; pinterest_full; twitter_full; and youtube_full. The result of all of this is that security scanners can no longer find malware just by testing for valid syntax.
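
To illustrate the defensive problem Sansec describes, here is a small heuristic sketch of my own (not Sansec's detection logic): it flags inline <path> data containing characters outside the normal SVG path grammar, or script-like tokens near a <path> element. Because this campaign's payloads were syntactically valid, the first check alone would miss them, which is exactly why scanners that only test for valid syntax fall short.

```python
# Heuristic sketch (not Sansec's detector): flag inline SVG elements whose
# <path> data carries script-like content rather than ordinary drawing
# commands. Valid SVG path data is limited to the command letters
# M/L/H/V/C/S/Q/T/A/Z plus digits, signs, dots and separators.
import re
import sys

PATH_D = re.compile(r'<path[^>]*\sd="([^"]*)"', re.IGNORECASE)
LEGAL_PATH_CHARS = re.compile(r'^[MmLlHhVvCcSsQqTtAaZz0-9eE\s,.+-]*$')
SCRIPT_TOKENS = ("eval", "atob", "fromCharCode", "document.", "window.")

def suspicious_paths(markup: str):
    """Yield (snippet, reason) pairs for <path> elements worth a closer look."""
    for match in PATH_D.finditer(markup):
        data = match.group(1)
        if not LEGAL_PATH_CHARS.match(data):
            yield data[:80], "non-path characters inside path data"
        elif any(tok in markup[match.start():match.start() + 2000] for tok in SCRIPT_TOKENS):
            yield data[:80], "script-like tokens near the path element"

if __name__ == "__main__":
    html = open(sys.argv[1], encoding="utf-8", errors="ignore").read()
    for snippet, reason in suspicious_paths(html):
        print(f"[!] {reason}: {snippet}...")
```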


Embedding Trust at the Core of Critical Infrastructure

Technology is no longer an extension of critical infrastructure, but rather at the core of it. The network sits between critical data, assets, and systems, and the users and services that leverage or operate them. It is uniquely positioned not only to add essential visibility and controls for resiliency, but it is also a well-placed, high-value target for attackers. Resiliency of the network infrastructure itself is crucial. Resilience is only achieved by building in steps to verify integrity with technical features embedded in hardware and software. Secure boot ensures a network device boots using only software that is trusted by the Original Equipment Manufacturer. Image signing allows a user to add a digital fingerprint to an image to verify that the software running on the network has not been modified. Runtime defenses protect against the injection of malicious code into running network software, making it very difficult for attackers to exploit known vulnerabilities in software and hardware configurations. Equally important, vendors must use a Secure Development Lifecycle to enhance security, reduce vulnerabilities, and promote consistent security policy across solutions. All of this might sound like geek mumbo-jumbo, but these are non-negotiables in today's world.
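
The image-signing idea can be made concrete with a short sketch. The snippet below is illustrative only (the file names, key format and tooling are assumptions, not any particular vendor's implementation): before loading a firmware image, the operator verifies a vendor-published detached signature over it, so any modification of the bits is rejected.

```python
# Minimal sketch of the idea behind image signing: check a vendor-published
# digital signature over a firmware image before it is ever loaded. Uses the
# third-party `cryptography` package; key names and file paths are illustrative.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.serialization import load_pem_public_key

def verify_image(image_path: str, sig_path: str, pubkey_path: str) -> bool:
    """Return True only if the image matches the vendor's detached signature."""
    with open(pubkey_path, "rb") as f:
        public_key = load_pem_public_key(f.read())   # assumed to be an Ed25519 key
    with open(image_path, "rb") as f:
        image = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, image)          # raises on any tampering
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    ok = verify_image("router-os.bin", "router-os.bin.sig", "vendor-pub.pem")
    print("image trusted" if ok else "image REJECTED: signature mismatch")
```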


Out on the edge: The new cloud battleground isn’t in the cloud at all

The big cloud providers are all pursuing similar paths to the edge, anchored by the on-premises versions of their cloud infrastructure that have started rolling out this year. AWS' Outposts, which was built for use within customer data centers, is also the foundation for AWS Local Zones and AWS Wavelength, which are miniature versions of the cloud giant's technology stack that live in small, local data centers and telecommunications carriers' point-of-presence facilities. The company says the experience it gained building out its retail e-commerce business lends itself perfectly to edge computing. "We already have more IoT devices connected to the cloud than any other cloud provider by a large margin. We have to do that for ourselves," said AWS' Vass. Customers can employ such Amazon inventions as AWS Greengrass for IoT devices, AWS Snowball for storage and AWS Robomaker for development of robotic devices using Lambda serverless functions "on a POP, in a Local Zone and in the cloud, manage it all centrally and do decentralized execution," he said. Microsoft's Azure cloud edge strategy uses a similar approach. Edge Zones, which the company rolled out early this year, are essentially scaled-down Azure data centers located within miles of a customer.


Is RPA the same as AI? What’s the Difference, and What Are the Use Cases?

RPA uses software robots to automate human actions in business processes that involve interaction with digital systems. These actions are usually simple and repetitive, which makes them prone to human error and can sap employees' motivation and efficiency. Software robots and RPA, on the other hand, bring notable benefits: accuracy (by minimizing human error), reliability (by being always available and by reducing delay), traceability (by providing audit trails and logs), and productivity (by increasing processing speed). A few examples of use cases are automating orders, processing payroll, customer onboarding, data validation, etc. ... Artificial intelligence "combines the human capacities for learning, perception, and interaction [...] at a level of complexity [and automation] that ultimately supersedes our own abilities." It is a spectrum of technologies (e.g., natural language processing, computer vision, predictive modeling, data clustering, and many more) that opens new use cases for businesses, as well as reduces entry cost for many existing business problems that still require too much human intervention. ... In order to tackle these use cases and leverage the benefits of AI in business, using a data science and machine learning platform is a best practice: it is the key to successfully scaling AI projects and to bringing a robust data methodology to all levels of the business.


When Is It Time to Retire Your Legacy System and Go Cloud?

When your tried-and-tested technology becomes unwieldy and impacts your bottom line, upgrading is critical for the business. Let's say you're a construction company that uses an obsolete legacy proof-of-delivery (PoD) system. The system requires three full-time customer service specialists to manage the application (e.g., find the right documents, send them over to customers, work with invoices, and so on). Due to the use of old-school tech, making a single change or adding a new feature is costly and time-consuming. On top of that, the risk of human error is high and can result in unhappy customers, overheads, and delayed payments. Furthermore, customers call you to request their PoD, and the number of monthly calls now exceeds 1,000 and requires a lot of manual labor. This is a telltale sign that your traditional processes aren't effective, which hurts your entire business. Creating a Cloud-based and easy-to-use PoD portal would ensure maximum automation of all relevant processes, elimination of customer calls or their reduction to a minimum, and significant time and cost savings along with increased efficiency.



Quote for the day:

"Anger and intolerance are the enemies of correct understanding." -- Mahatma Gandhi

Daily Tech Digest - December 05, 2020

What Tech Jargon Reveals about Bias in the Industry

Tech language was developed back in the early days of modern computing, during a time when racism around the world was much more explicit and often went unchallenged. But there is no reason we can't change that language. It's not embedded in the code itself; it's just how we talk about these concepts. I recently heard of an example where a team of coders working on a solution had to go through the "blacklist and whitelist" of terms/commands for a specific product. The "blacklist" was terms/commands they couldn't or shouldn't use while the "whitelist" was stuff that's OK. Because of the Black Lives Matter movement and what's in the news, they noticed these terms in a new light for the first time and changed the language they were using to avoid using those racialized terms. It's easy to just use different words, so why not? It's an easy low-cost, low-tech solution to change language and improve output. Recently, Microsoft removed terms like these from their documentation. Cloudflare is debiasing some of the terms used in their coding. There is no reason why such simple conscious actions can't be undertaken for the benefit of us all. The benefits of diversity are widely stated. But they're actually only available to companies when they include people.


How to make remote pair programming work

When you cannot assemble a team physically, turn to pair programming remotely. But to see the benefits of remote pair programming, approach the practice systematically with one of the following styles: unstructured, driver/navigator or ping-pong. Plan pair programming remotely with decisions about the team's skill level: Should novices work with experts, or is a different approach better? Editor's note: Interest in remote pair programming has risen during the global COVID-19 pandemic. Developers working in distributed, at-home settings for the first time should also check out tips to empower productive remote dev teams, and insights into psychological safety when the workplace is suddenly part of home life. ... Most pair programming relationships fall into the unstructured style, where two programmers work together in an ad hoc manner. With the collaboration being loosely guided, both programmers should have matching skill levels. A common variant of this style is the unstructured expert-novice pair, where an expert programmer guides a novice. An unstructured approach is hard to discipline and unlikely to persist on longer projects. Unstructured pair programming is also harder to sustain remotely than styles with established guidelines.


Enterprise Architecture: What It Is, Why It's Important And How To Talk About It

The enterprise architect’s ultimate goal is to enact effective and measurable change. To do so, architects work to create not just a complete picture of the organization, but also roadmaps that represent different desired future states. By mapping out the paths to desired future states, they can decide the best path to take—with metrics to back up that decision showing how much better the organization will operate once changes are made. With precise understanding of the tradeoffs that come with each potential scenario, architects can propose multiple solutions in line with changing strategies. These scenarios can be optimized for different business outcomes, like growth, cost optimization, risk reduction, etc., and ultimately drive important business decisions that can be confidently backed with data. Modern enterprise architecture tools go far beyond the old-school perceptions of EA as a simple visualization tool, and now include dynamic and collaborative data that supports the different ways to model future states. One example of enterprise architecture in action comes from New Zealand’s largest retail grocery organization, Foodstuffs, which implemented enterprise architecture to help it stay agile and competitive.


Intel details Horse Ridge II as helping overcome quantum computing hurdle

Horse Ridge II, Intel says, supports "enhanced capabilities and higher levels of integration for elegant control of the quantum system". New features include the ability to manipulate and read qubit states and control the potential of several gates required to entangle multiple qubits. Horse Ridge II builds on the first-generation system-on-chip's (SoC) ability to generate radio frequency pulses to manipulate the state of the qubit, known as qubit drive. "With Horse Ridge I, we essentially were able to drive the qubit, basically apply signals that would manipulate the state of the qubit between 0-1; with Horse Ridge II, we can not only drive the qubit, but we can read out the state of the qubit, we can apply pulses that would allow us to control the interaction between two qubits, and so we've added additional controller capabilities to Horse Ridge II," Clarke said. "We have a very programmable filter that would allow us to send a variety of different pulse shapes to control our qubits, we have an integrated microcontroller, we have a lot of DACs -- digital to analogue controllers -- that would allow us to control the individual qubits to a greater extent and these DACs would otherwise be discrete boxes in a control rack external to the refrigerator, so we're starting to take some of these boxes and put them into our SoC inside of our qubit refrigerator."
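
To make the "pulse shapes" remark a little more concrete, here is a purely illustrative sketch (nothing to do with Intel's actual firmware or tooling): a Gaussian-windowed drive pulse sampled at a DAC rate, at an intermediate frequency that analog mixers would later up-convert to the qubit frequency.

```python
# Illustrative only - not Intel's Horse Ridge code. Shows what a
# "programmable pulse shape" amounts to numerically: a sampled Gaussian
# envelope modulating a tone, ready to be pushed out through a DAC.
import numpy as np

def drive_pulse(duration_ns=40.0, sample_rate_gsps=2.5,
                if_ghz=0.25, amplitude=0.3):
    """Return DAC samples for a Gaussian-windowed qubit drive pulse.

    The 0.25 GHz tone is an intermediate frequency; analog mixers would
    up-convert it to the actual qubit frequency of several GHz.
    """
    n = int(duration_ns * sample_rate_gsps)
    t = np.arange(n) / sample_rate_gsps              # time axis in ns
    sigma = duration_ns / 6.0                        # envelope width
    envelope = amplitude * np.exp(-0.5 * ((t - duration_ns / 2) / sigma) ** 2)
    return envelope * np.cos(2 * np.pi * if_ghz * t)

samples = drive_pulse()
print(f"{samples.size} samples, peak amplitude {samples.max():.3f}")
```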


What organisations should expect next in the evolution of data

Before Covid-19 hit, data was already becoming fundamental to organisations’ future success. That journey has been supercharged. According to new research from Druva, 79% of IT decision makers in the US and UK now see data management and protection as key to competitive advantage. Similarly, 73% say they rely more heavily on data for business decisions, while 33% believe its value has permanently increased since the pandemic began. Therefore, if the message for IT leaders on their data strategy pre-pandemic was ‘get moving’, in 2021 it will be ‘go faster’. As the move towards a digitally-led future gathers pace, we’ll see a growing number of organisations move to make data a pervasive part of everything, from operational decision-making to customer experiences. Rapid availability and analysis will be vital. That’s not to say this transformation comes without risk. The same Druva research found 73% of IT decision makers have become more concerned about protecting their data from ransomware, and rightly so. Many report a year-on-year increase in phishing, malware and ransomware attacks. With large numbers of people working outside the office and some high-profile recent successes for cyber criminals, we can expect this threat to grow further in 2021.


Blockchain Attempts To Secure The Supply Chain

Counterfeiting is a real and growing problem. “We have several customers who are very concerned about counterfeiting and other security issues, and they are thinking of multiple ways to secure their ICs and systems,” said Geoff Tate, CEO of Flex Logix. This is partly the role of identity, but identity may not be sufficient without the further knowledge of the history of the item. And that history can involve an enormous range of considerations. How much to include must balance the cost of tracking and storing data about huge numbers of individual components and systems against the consequences of having too little historical information. “Blockchains provide a convenient means to permanently record transactions, and they have application to the provenance of components,” said John Hallman, product manager for trust and security at OneSpin Solutions. Dave Huntley, business development at PDF Solutions and co-chair of three SEMI committees/task forces, elaborated further. “When a new asset like a package is assembled, it is enrolled as a brand-new asset on the blockchain, along with its bill of materials,” he said. “You now have a genealogy, and you could take a module from a car, open it up, figure out the printed circuit board and slide it out, open that up, look at the packages inside, open one of them up, and look at the die inside. ...”
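
A toy sketch can make the enrollment idea concrete. The code below is not any production supply-chain blockchain; it simply shows how appending each asset together with its bill of materials to a hash-chained record gives both tamper evidence and the genealogy Huntley describes, from module down to die.

```python
# Toy sketch of the enrollment idea (not a real supply-chain blockchain):
# each new asset - a die, a package, a module - is appended as a record
# naming its bill of materials, so the chain of hashes doubles as a
# tamper-evident genealogy.
import hashlib, json, time

class ProvenanceLedger:
    def __init__(self):
        self.blocks = []

    def enroll(self, asset_id: str, bill_of_materials: list) -> dict:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        record = {
            "asset_id": asset_id,
            "bill_of_materials": bill_of_materials,   # child asset IDs
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.blocks.append(record)
        return record

    def genealogy(self, asset_id: str, depth=0):
        """Recursively print an asset and everything enrolled inside it."""
        block = next(b for b in self.blocks if b["asset_id"] == asset_id)
        print("  " * depth + block["asset_id"])
        for child in block["bill_of_materials"]:
            self.genealogy(child, depth + 1)

ledger = ProvenanceLedger()
ledger.enroll("die-0xA1", [])
ledger.enroll("package-77", ["die-0xA1"])
ledger.enroll("ecu-module-3", ["package-77"])
ledger.genealogy("ecu-module-3")
```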


Building real cyber resiliency in government

While threats are constantly evolving, Branko Bokan, a cybersecurity specialist at CISA, said the tactics, techniques and procedures are actually the same -- the real change is in the distribution type and frequency of these attacks. "Regardless of how well we try to prevent cyberattacks, they will always happen, and we have to be ready and able to detect bad things when they happen, or as soon as possible after they happen," he said. Often, organizations think of cybersecurity as preventing/protecting networks against cyber threats – but that is just one element of the cybersecurity framework, as outlined by the National Institute of Standards and Technology. The NIST framework includes five functions, which match the pillars of cyber resiliency: identify, protect, detect, respond and recover. By dividing cybersecurity into these five stages, agencies can identify cyber actions adversaries might take. It can also help them create a coverage map of the threat landscape to see how their current capabilities can protect, detect and respond to each one of these actual threat actions – and identify where the gaps are. As agencies take a threat-based approach to security, cloud is also playing a large role in resiliency plans.


Microsoft Cloud Security Exec Talks New Tech, WFH, Gamification

From a cloud operator's perspective, Ollmann is seeing the growth of cloud security posture management (CSPM) technologies, which are meant to help security teams bring together their assets and resources in one place to better manage and understand their cloud infrastructure.  "CSPM has been that vehicle for providing visibility of security risk, vulnerabilities, vulnerability management, and then a little bit of gamification to enable and help customers and organizations improve their security posture as they go along," he explained. Security posture management gives infosec teams visibility and control while managing policies. The gamification – a "loose interpretation" of the term, Ollmann noted – is in the score, which informs teams of the risk or security value in a particular asset, resource, application, or environment as a whole. Every vulnerability and poor or absent configuration has a value tied to it. By addressing the weaknesses, a team can increase its overall security score. "Security will never be 100%, so hopefully as you develop these sorts of things, you keep improving on your score," he said. Some larger businesses have multiple apps in multiple environments, and teams compete against one another to boost their numbers.
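
The scoring mechanic itself is simple to sketch. The snippet below is a loose illustration of the idea, not Microsoft's actual Secure Score formula: each unresolved finding carries a weight, and resolving findings moves the percentage that teams compete on.

```python
# Loose sketch of the "security score" idea - not Microsoft's actual
# calculation. Every unresolved finding carries a weight; fixing findings
# raises the percentage, which is what teams compete on. Data is invented.
FINDINGS = [
    {"resource": "storage-acct-7", "issue": "public blob access",   "weight": 10, "resolved": False},
    {"resource": "vm-web-01",      "issue": "management port open", "weight": 8,  "resolved": True},
    {"resource": "sql-prod",       "issue": "encryption disabled",  "weight": 6,  "resolved": False},
]

def posture_score(findings) -> float:
    total = sum(f["weight"] for f in findings)
    earned = sum(f["weight"] for f in findings if f["resolved"])
    return 100.0 * earned / total if total else 100.0

print(f"current posture score: {posture_score(FINDINGS):.0f}%")
```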


The resurgence of enterprise architecture

Because enterprise architecture enables a business to map out all their systems and processes and how they connect together, EA is becoming a “very important method and tool to drive forward digital transformation,” said Christ. He explained that since most transformations don’t start off as greenfield projects, about 70% of them fail due to their existing IT landscape. Having a solid baseline, which EA aims to provide, is crucial for any transformation initiative.  “The reason for this is that once you’ve started a transformation program, you discover new dependencies because of applications connected to other systems that you never knew of before. So replacing them with better applications, with newer interfaces, and with better APIs all of a sudden isn’t as easy as you thought when you were starting the transformation program,” he explained.  Businesses also want to understand where their investments in the IT landscape are going, and connect the business strategic goals to the activities in their transformation program. “This is where enterprise architecture can help you. It allows you to look at this whole hierarchy of objectives and programs you are setting up, the affected applications you are having, and the underlying changes in detail,” said Christ.


Quantum Acceleration in 2020

Many frameworks and tools have emerged for developing quantum applications based on these algorithms. Microsoft’s Quantum Development Kit (QDK), for example, provides a tool set integrated with leading development environments, open-source resources, and the company’s high-level programming language, Q#. It also offers access to quantum inspired optimization (QIO) solvers for running optimization problems in the cloud. For building quantum circuits and algorithms that take advantage of quantum processors, IBM offers Qiskit, an open-source quantum computing library for Python. Cirq is yet another quantum programming library created by the team of scientists and engineers at Google. It contains a growing set of functionalities allowing users to manipulate and simulate quantum circuits. Finally, Quil is a quantum programming toolkit from Rigetti that also provides a diverse array of functionalities and data structures for supporting quantum computation. There are also packages, such as Xanadu’s Strawberry Fields and D-Wave's Leap, aimed at quantum backends that are not based on the gate model paradigm. In addition, we see the ongoing creation of domain-specific tools, such as OpenFermion and Xanadu’s PennyLane, purpose-built for running quantum chemistry and quantum machine learning applications, respectively.
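
As a flavour of what these libraries look like in practice, here is a minimal Qiskit example (assuming a standard "pip install qiskit" with the Aer simulator, using the API as it stood at the time) that builds and samples a two-qubit Bell circuit:

```python
# Minimal Qiskit example: prepare a two-qubit Bell state and measure it
# on the local Aer simulator.
from qiskit import QuantumCircuit, Aer, execute

qc = QuantumCircuit(2, 2)
qc.h(0)              # put qubit 0 into superposition
qc.cx(0, 1)          # entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])

backend = Aer.get_backend("qasm_simulator")
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)        # expect roughly half '00' and half '11'
```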



Quote for the day:

"The only person you are destined to become is the person you decide to be." -- Ralph Waldo Emerson

Daily Tech Digest - December 04, 2020

The evolving role of operations in DevOps

To better understand how DevOps changes the responsibilities of operations teams, it will help to recap the traditional, pre-DevOps role of operations. Let’s take a look at a typical organization’s software lifecycle: before DevOps, developers package an application with documentation, and then ship it to a QA team. The QA teams install and test the application, and then hand off to production operations teams. The operations teams are then responsible for deploying and managing the software with little-to-no direct interaction with the development teams. These dev-to-ops handoffs are typically one-way, often limited to a few scheduled times in an application’s release cycle. Once in production, the operations team is then responsible for managing the service’s stability and uptime, as well as the infrastructure that hosts the code. If there are bugs in the code, the virtual assembly line of dev-to-qa-to-prod is revisited with a patch, with each team waiting on the other for next steps. This model typically requires pre-existing infrastructure that needs to be maintained, and comes with significant overhead. While many businesses continue to remain competitive with this model, the faster, more collaborative way of bridging the gap between development and operations is finding wide adoption in the form of DevOps.


Monitoring Microservices the Right Way

The common practice by StatsD and other traditional solutions was to collect metrics in push mode, which required explicitly configuring each component and third-party tool with the metrics collector destination. With the many frameworks and languages involved in modern systems it has become challenging to maintain this explicit push-mode sending of metrics. Adding Kubernetes to the mix increased the complexity even further. Teams were looking to offload the work of collecting metrics. This was a distinct strong point of Prometheus, which offered pull-mode scraping, together with service discovery of the components ("targets" in Prometheus terms). In particular, Prometheus shone with its native scraping from Kubernetes, and as demand for Kubernetes skyrocketed, so did Prometheus's. As the popularity of Prometheus grew, many open source projects added support for the Prometheus Metrics Exporter format, which has made metrics scraping with Prometheus even more seamless. Today you can find Prometheus exporters for many common systems including popular databases, messaging systems, web servers, or hardware components.
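
The contrast with push-mode StatsD is easiest to see in code. The sketch below (illustrative service and metric names, assuming the prometheus-client Python package) exposes a /metrics endpoint in the Prometheus exporter format and leaves the collection schedule entirely to the scraper:

```python
# Pull-mode sketch: instead of pushing StatsD metrics at a collector, the
# service exposes an HTTP /metrics endpoint and lets Prometheus scrape it.
# Assumes `pip install prometheus-client`; names are illustrative.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("orders_processed_total", "Orders processed by this service")
LATENCY = Histogram("order_processing_seconds", "Time spent processing an order")

def process_order():
    with LATENCY.time():           # records an observation into the histogram
        time.sleep(random.uniform(0.01, 0.1))
    REQUESTS.inc()

if __name__ == "__main__":
    start_http_server(8000)        # Prometheus scrapes http://<pod>:8000/metrics
    while True:
        process_order()
```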


Will Blockchain Replace Clearinghouses? A Case Of DVP Post-Trade Settlement

Blockchain technology can improve settlement processes substantially. First, using a blockchain makes it possible to decrease counterparty risk, as it enables a trustless settlement process that is similar to DVP settlement in that the delivery of an asset is directly linked to the instantaneous payment for the asset. Moreover, atomic swaps enable direct "barter" operations in which one tokenized asset is directly exchanged for another tokenized asset (delivery versus delivery). Here, "directly exchanged" means that the technology guarantees that both transfers have to happen. It is technologically not possible that only one transfer is executed if the other transfer is interrupted for whatever reason. Furthermore, if a blockchain is used for settlement, a third-party intermediary that helps to facilitate settlement in the case of a conventional, non-DLT-based DVP is no longer necessary. This implies peer-to-peer settlement that leads to substantial cost savings. In addition, cross-chain atomic swaps cover more complex cases such as trustless settlement among more than two parties.
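
The settlement argument rests on atomicity, which a toy example can illustrate. The sketch below is not a real DLT implementation; it simply demonstrates the property being claimed: both legs of the delivery-versus-payment swap are applied in a single state transition, so either both transfers happen or neither does.

```python
# Toy illustration of atomic DVP settlement: the swap either commits both
# legs or rolls back entirely. Asset names and balances are invented.
class Ledger:
    def __init__(self, balances):
        self.balances = balances                     # {(party, asset): amount}

    def atomic_swap(self, party_a, party_b, give_a, give_b):
        """give_a = (asset, amount) from A to B; give_b likewise from B to A."""
        proposed = dict(self.balances)               # work on a copy
        for sender, receiver, (asset, amount) in (
            (party_a, party_b, give_a),
            (party_b, party_a, give_b),
        ):
            if proposed.get((sender, asset), 0) < amount:
                raise ValueError(f"{sender} cannot deliver {amount} {asset}")
            proposed[(sender, asset)] = proposed.get((sender, asset), 0) - amount
            proposed[(receiver, asset)] = proposed.get((receiver, asset), 0) + amount
        self.balances = proposed                     # commit both legs at once

ledger = Ledger({("alice", "BOND-X"): 100, ("bob", "CASH-TOKEN"): 1_000_000})
ledger.atomic_swap("alice", "bob", ("BOND-X", 100), ("CASH-TOKEN", 990_000))
print(ledger.balances)
```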


Cloud native security: A maturing and expanding arena

Along with the usual array of preventative controls that are deployed as part of a cloud native platform, companies need to focus on detection and response to breaches. It’s important to note that the usual toolsets that are put in place will need to be supplemented by cloud native tools that can provide targeted visibility into container-based workflows. Projects like Falco, which can integrate with container workloads at a low level, are an important part of this. Additionally, companies should make sure to properly use the facilities that Kubernetes provides. For example, Kubernetes audit logging is rarely enabled by default, but it’s an important control for any production cluster. A key takeaway for container security deployments is the importance of getting security controls in place before workloads are placed into production. Ensuring that developers are making use of Kubernetes features like Security Contexts to harden their deployments will make the deployment of mandatory controls much easier. Also ensuring that a “least privilege” initial approach is taken to network traffic in a cluster can help avoid the “hard shell, soft inside” approach to security that allows attackers to easily expand their access after an initial compromise has occurred.
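
Checking where hardening stands is straightforward to automate. As one illustration (using the official Kubernetes Python client; this is a sketch, not a complete policy check), the snippet below lists containers that run privileged or without any securityContext at all, which is typically where a least-privilege effort starts:

```python
# Small audit sketch using the official Kubernetes Python client
# (`pip install kubernetes`): list containers that run privileged or
# without a securityContext at all.
from kubernetes import client, config

def audit_security_contexts():
    config.load_kube_config()                  # or load_incluster_config()
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        for c in pod.spec.containers:
            sc = c.security_context
            if sc is None:
                print(f"{pod.metadata.namespace}/{pod.metadata.name}/{c.name}: no securityContext")
            elif sc.privileged:
                print(f"{pod.metadata.namespace}/{pod.metadata.name}/{c.name}: PRIVILEGED")

if __name__ == "__main__":
    audit_security_contexts()
```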


Cloud computing in the real world: The challenges and opportunities of multicloud

In an ideal world, application workloads -- whatever their heritage -- should be able to move seamlessly between, or be shared among, cloud service providers (CSPs), alighting wherever the optimal combination of performance, functionality, cost, security, compliance, availability, resilience, and so on, is to be found -- while avoiding the dreaded 'vendor lock-in'. "Businesses taking a multicloud approach can cherry-pick the solutions that best meet their business needs as soon as they become available, rather than having to wait for one vendor to catch up," John Abel, technical director, office of the CTO, Google Cloud, told ZDNet. "Avoiding vendor lock in, increased agility, more efficient costs and the promise of each provider's best solutions are all too great to ignore." That's certainly the view taken by many respondents to the survey underpinning the 2020 State of Multicloud report from application resource management company Turbonomic. ... "Bottom-line, cultural change is a prerequisite for managing the complexity of today's hybrid and multicloud environments. Teams must operate faster, dynamically adapting to shifting market trends to stay competitive.


An Architect's guide to APIs: REST, GraphQL, and gRPC

The benefit of taking an API-based approach to application architecture design is that it allows a wide variety of physical client devices and application types to interact with the given application. One API can be used not only for PC-based computing but also for cellphones and IoT devices. Communication is not limited to interactions between humans and applications. With the rise of machine learning and artificial intelligence, service-to-service interaction facilitated by APIs will emerge as the Internet's principal activity. APIs bring a new dimension to architectural design. However, while network communication and data structures have become more conventional over time, there is still variety among API formats. There is no "one ring to rule them all." Instead, there are many API formats, with the most popular being REST, GraphQL, and gRPC. Thus a reasonable question to ask is, as an Enterprise Architect, how do I pick the best API format to meet the need at hand? The answer is that it's a matter of understanding the benefits and limitations of the given format.
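
A short side-by-side sketch shows how differently the two most common formats shape a request. The endpoints and field names below are hypothetical, and the gRPC case is omitted because it normally relies on generated stubs; the point is the shape of each call, not a real service.

```python
# Side-by-side sketch of the same lookup in two API styles. Endpoints and
# fields are hypothetical. Assumes `pip install requests`.
import requests

BASE = "https://api.example.com"

# REST: the resource lives at a URL; the server decides which fields you get.
product = requests.get(f"{BASE}/products/42", timeout=5).json()

# GraphQL: one endpoint; the client names exactly the fields it wants.
query = """
query {
  product(id: 42) {
    name
    price
    inventory { warehouse quantity }
  }
}
"""
gql = requests.post(f"{BASE}/graphql", json={"query": query}, timeout=5).json()

print(product.get("name"), gql["data"]["product"]["price"])
```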


Unlock the Power of Omnichannel Retail at the Edge

The Edge exists wherever the digital world and physical world intersect, and data is securely collected, generated, and processed to create new value. According to Gartner, by 2025, 75 percent of data will be processed at the Edge. For retailers, Edge technology means real-time data collection, analytics and automated responses where they matter most — on the shop floor, be that physical or virtual. And for today's retailers, it's what happens when Edge computing is combined with Computer Vision and AI that is most powerful and exciting, as it creates the many opportunities of omnichannel shopping. With Computer Vision, retailers enter a world of powerful sensor-enabled cameras that can see much more than the human eye. Combined with Edge analytics and AI, Computer Vision can enable retailers to monitor, interpret, and act in real-time across all areas of the retail environment. This type of vision has obvious implications for security, but for retailers it also opens up huge possibilities in understanding shopping behavior and implementing rapid responses. For example, understanding how customers flow through the store, and at what times of the day, can allow the retailer to put more important items directly in their paths to be more visible.


Hacking Group Used Crypto Miners as Distraction Technique

The use of the monero miners helped the hacking group establish persistence within targeted networks and enabled them to deploy other spy tools and malware without raising suspicion. That's because cryptocurrency miners are usually low-level security priorities for most organizations, according to Microsoft. "Cryptocurrency miners are typically associated with cybercriminal operations, not sophisticated nation-state actor activity," the Microsoft report notes. "They are not the most sophisticated type of threats, which also means that they are not among the most critical security issues that defenders address with urgency. Recent campaigns from the nation-state actor Bismuth take advantage of the low-priority alerts coin miners cause to try and fly under the radar and establish persistence." The Microsoft report also notes: "While this actor's operational goals remained the same - establish continuous monitoring and espionage, exfiltrating useful information as it surfaced - their deployment of coin miners in their recent campaigns provided another way for the attackers to monetize compromised networks."


Cypress vs. Selenium: Compare test automation frameworks

Selenium suits applications that don't have many complex front-end components. Selenium's support for multiple languages makes it a good choice as the test automation framework for development projects that aren't in JavaScript. Selenium is open source, has ample documentation and is well supported by many other open source tools. Also, when a project calls for behavior-driven development (BDD), organizations find Selenium fits the approach well, as many libraries, like Cucumber or Capybara, make writing tests within BDD structured and implementable. Cypress is a great tool to automate JavaScript application testing. And that's a large group, as JavaScript is the language of choice for many modern web applications. Cypress integrates well with the client side and asynchronous design of these applications, as it natively ties into the web browser. Thus, test scripts run much quicker and more reliably than they would for the same application tested with Selenium for automation. Cypress might be better suited for a testing team with programming experience, as JavaScript is a complex single-threaded, non-blocking, asynchronous, concurrent language.
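
To give a feel for the Selenium side of that comparison, here is a minimal Python script (URL, locators and credentials are placeholders; it assumes the selenium package and a local chromedriver). The same test could equally be written in Java or C#, which is the multi-language point in Selenium's favour.

```python
# Minimal Selenium sketch in Python. The target URL, element locators and
# credentials are placeholders, not a real application.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")
    driver.find_element(By.NAME, "username").send_keys("qa-user")
    driver.find_element(By.NAME, "password").send_keys("not-a-real-password")
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
    assert "Dashboard" in driver.title       # crude check that login worked
finally:
    driver.quit()
```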


The Complexity of Product Management and Product Ownership

An issue for organisational leaders considering how to design for product flow is that when someone says product ownership or product management we are immediately uncertain which of many possible definitions the person is referring to. This level of ambiguity is a constant struggle in the software world. Agile, DevOps, and Digital are all now terms which are the subject of confusion and passionate never-ending debates. Product ownership/management has now joined them. Kent Beck described a similar issue in software teams when everyone has slightly different concepts in their minds when describing system components. He called this the problem of metaphor and prescribed a shared System Metaphor as a key practice in eXtreme Programming. We need to take this practice of System Metaphor to our wider discussions as product delivery groups if we are going to resolve bigger issues. To help consider some of the Metaphor surrounding the product owner function I highly recommend the blog by Roman Pichler (an author on product management). He does a good job of creating metaphors for the key variations in product management roles.



Quote for the day:

"Real generosity towards the future lies in giving all to the present." -- Camus

Daily Tech Digest - December 03, 2020

The Service Factory of the Future

The service factory of the future will break the compromise between personalization and industrialization by leveraging standard service bits: small elements of service, such as a chatbot or an online shopping cart. Service bits will increasingly consist of “microservices”—digitized service offerings or processes—that are accessed through APIs and either created in-house or procured from ecosystem partners. Bits can also be automated or manual service activities based on legacy IT systems. By flexibly combining service bits, the service factory of the future will be able to create hyperpersonalized offerings and packages tailored to an individual’s needs, preferences, and habits on the basis of a wide range of customer data. Migration to the service factory of the future requires transformative change in five critical dimensions: customer experience, service delivery, digital technology, people and organization, and digital ecosystems. ... The service factory of the future will enable providers to be predictive, preventive, and proactive. It will anticipate customers’ needs and approach them with solutions and hyperpersonalized experiences. More important, it will develop capabilities to prevent service lapses from occurring in the first place.


FBI: BEC Scams Are Using Email Auto-Forwarding

The first was detected in August when fraudsters used the email forwarding feature in the compromised accounts of a U.S.-based medical company. The attackers then posed as an international vendor and tricked the victim into making a fraudulent payment of $175,000, according to the alert. Because the targeted organization did not sync its webmail with its desktop application, it was not able to detect the malicious activity, the FBI notes. In a second case in August, the FBI found fraudsters created three forwarding rules within a compromised email account. "The first rule auto-forwarded any email with the search terms 'bank,' 'payment,' 'invoice,' 'wire,' or 'check' to cybercriminals' email accounts," the alert notes. "The other two rules were based on the sender's domain and again forwarded to the same email addresses." Chris Morales, head of security analytics at security firm Vectra AI, says that in addition to reaping fraudulent payments, fraudsters can use email-forwarding to plant malware or malicious links in documents to circumvent prevention controls or to steal data and hold it for ransom. In a keynote presentation at Group-IB's CyberCrimeCon 2020 virtual conference in November, Craig Jones, director of cybercrime at Interpol, noted that BEC scammers are among the threat actors that are retooling their attacks to take advantage of the COVID-19 pandemic.


Robots Can Now Have Tunable Flexibility & Improved Performance

Typically, the mechanisms required to accommodate variations in stiffness are bulky relative to the structure itself, whereas curved origami can compactly support a wide stiffness range with on-demand flexibility. The structures described in Jiang and team's research combine the folding energy at the origami creases with the bending of the panel, tuned by switching among multiple curved creases between two points. Curved origami thus empowers a single robot to achieve a variety of movements. A pneumatic, swimming robot created by the team can perform nine distinct movements, including fast, medium, slow, straight and rotational motion, simply by changing which creases are used.


Migrating a Monolith towards Microservices with the Strangler Fig Pattern

One of the few benefits of the Zope framework is that the fragile nature of the software has forced us to work in small increments, and ship in frequent small releases. Having unreleased code lying around for more than a few hours has led to incidents around deployment, like accidental releases or code being overwritten. So the philosophy has been "write it and ship it immediately". Things like feature toggles and atomic releases were second nature. Therefore, when we designed the wrapper and the new service architectures, feature toggles were baked in from the start (if a little crude in the first cuts). As a result, from the early days of the project, code was being pushed to live within hours of being committed. Moving to a framework like Flask enabled "proper" CI pipelines, which can perform actual checks on the code. Whilst a deployment into production is manually initiated, all other environment builds and deployment are initiated by a commit into a branch. The aim is to keep the release cadence the same as it has been with Zope. Changes are small, with multiple small deployments a day rather than massive "releases". We then use feature toggles to enable functionality in production.
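
A deliberately crude sketch shows what those first-cut toggles can look like; the function and flag names below are illustrative, not the team's actual code. The wrapper routes a call to the new Flask-backed service when the flag is on and falls back to the legacy Zope path otherwise.

```python
# Crude feature-toggle sketch in the spirit of the "first cut" wrappers:
# route to the new service when the flag is on, else keep the legacy path.
import os

def flag_enabled(name: str) -> bool:
    """First cut: flags come from the environment, e.g. FT_NEW_SEARCH=1."""
    return os.environ.get(f"FT_{name.upper()}", "0") == "1"

def search_legacy(term):       # existing Zope-era code path
    return f"legacy results for {term}"

def search_new_service(term):  # strangler-fig replacement behind the wrapper
    return f"flask service results for {term}"

def search(term):
    if flag_enabled("new_search"):
        return search_new_service(term)
    return search_legacy(term)

print(search("invoices"))
```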


Misconfigured Docker Servers Under Attack by Xanthe Malware

“Once all possible keys have been found, the script proceeds with finding known hosts, TCP ports and usernames used to connect to those hosts,” said researchers. “Finally, a loop is entered which iterates over the combination of all known usernames, hosts, keys and ports in an attempt to connect, authenticate on the remote host and launch the command lines to download and execute the main module on the remote system.” Misconfigured Docker servers are another way that Xanthe spreads. Researchers said that Docker installations can be easily misconfigured and the Docker daemon exposed to external networks with a minimal level of security. Various past campaigns have been spotted taking advantage of such misconfigured Docker installations; for instance, in September, the TeamTNT cybercrime gang was spotted attacking Docker and Kubernetes cloud instances by abusing a legitimate cloud-monitoring tool called Weave Scope. In April, an organized, self-propagating cryptomining campaign was found targeting misconfigured open Docker Daemon API ports; and in October 2019, more than 2,000 unsecured Docker Engine (Community Edition) hosts were found to be infected by a cryptojacking worm dubbed Graboid.
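
From the defender's side (this is emphatically not the malware's code), a quick hygiene check is to see whether any of your own hosts answer the Docker Engine API unauthenticated on the legacy plaintext port 2375, which is the kind of misconfiguration campaigns like Xanthe hunt for. The host list below is illustrative.

```python
# Defender-side sketch: probe your own hosts for a Docker Engine API that
# answers without TLS or authentication on port 2375. Assumes `requests`.
import requests

HOSTS = ["10.0.0.12", "10.0.0.37"]   # your own inventory, not real addresses

for host in HOSTS:
    url = f"http://{host}:2375/version"
    try:
        resp = requests.get(url, timeout=3)
        if resp.ok:
            info = resp.json()
            print(f"[!] {host}: Docker API exposed without TLS "
                  f"(engine {info.get('Version', 'unknown')})")
    except requests.RequestException:
        print(f"    {host}: no unauthenticated Docker API reachable")
```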


Finding rogue devices in your network using Nmap

Just knowing which ports are open is not enough, as services may often be listening on non-standard ports. From a security perspective, you will also want to know what software and version are behind each port. Thanks to Nmap's Service and Version Detection capabilities, it is possible to perform a complete network inventory and host and device discovery, checking every single port per device or host and determining what software is behind each. Nmap connects to and interrogates each open port, using detection probes that the software may understand. By doing this, Nmap can provide a detailed assessment of what is out there rather than just meaningless open ports. ... Rogue DHCP servers are just like regular DHCP servers, but they are not managed by the IT or network staff. These rogue servers usually appear when users knowingly or unknowingly connect a router to the network. Another possibility is a compromised IoT device such as a mobile phone, printer, camera, tablet or smartwatch, or something worse, such as a compromised IT application or resource. Rogue DHCP servers are frustrating, especially if you are trying to deploy a fleet of servers using PXE, as PXE depends heavily on DHCP.
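
Both capabilities are easy to drive from a script. The sketch below simply shells out to Nmap (which must be installed, and the DHCP broadcast script generally needs elevated privileges; the target range is a placeholder): -sV for the software-and-version inventory, and the broadcast-dhcp-discover NSE script to see which servers answer a DHCPDISCOVER on the local segment.

```python
# Sketch of driving the two Nmap features discussed above from Python:
# service/version detection (-sV) for inventory, and the broadcast
# DHCP-discover NSE script to spot DHCP servers answering on the segment.
import subprocess

def run(cmd):
    print("$", " ".join(cmd))
    return subprocess.run(cmd, capture_output=True, text=True, check=False).stdout

# 1. Inventory: which software and version sits behind every open port?
print(run(["nmap", "-sV", "-T4", "192.168.1.0/24"]))

# 2. Rogue DHCP hunt: anything answering a DHCPDISCOVER broadcast that is
#    not one of your managed DHCP servers deserves investigation.
print(run(["nmap", "--script", "broadcast-dhcp-discover"]))
```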


Digital transformation, innovation and growth is accelerated by automation

Automation is a key digital transformation trend for 2021 and beyond. Here are some key findings regarding the importance of process automation. According to Salesforce, 81% of IT organizations will automate more tasks to allow team members to focus on innovation over the next 12-18 months. McKinsey notes that 57% of organizations say they are at least piloting automation of processes in one or more business units or functions. And 31% of IT decision makers say that automation is a key business initiative tied to digital transformation, per MuleSoft. Integration continues to be a challenge for process automation. Sixty percent of line of business users agree that an inability to connect systems, applications, and data hinders automation initiatives. The future of automation is declarative programming. "In 2021, we'll see more and more systems be intent-based, and see a new programming model take hold: a declarative one. In this model, we declare an intent - a desired goal or end state - and the software systems connected via APIs in an application network autonomously figure out how to simply make it so," said Uri Sarid, CTO, MuleSoft. McKinsey estimates that automation could raise productivity in the global economy by up to 1.4% annually. 


Why microlearning is the key to cybersecurity education

Most organizations are used to relatively “static” training. For example: fire safety is fairly simple – everyone knows where the closest exit is and how to escape the building. Worker safety training is also very stagnant: wear a yellow safety vest and a hard hat, make sure to have steel toed shoes on a job site, etc. The core messages for most trainings don’t evolve and change. That’s not the case with cybersecurity education and training: attacks are ever-changing, they differ based on the targeted demographic, current affairs, and the environment we are living in. Cybersecurity education must be closely tied to the value and mission of an organization. It must also be adaptable and evolve with the changing times. Microlearning and gamification are new ways to help encourage and promote consistent cybersecurity learning. This is especially important because of the changing demographics: there are currently more millennials in the workforce than baby boomers, but the training methods have not altered dramatically in the last 30 years. Today’s employee is younger, more tech-savvy and socially connected. Modern training needs to acknowledge and utilize that.


Cut IT Waste Before IT Jobs

While it is impossible to fully correlate the impact of ITAM on job retention, we can illustrate the opportunity with some simple sums. Starting with Gartner’s latest Worldwide IT Spending Forecast, the total spend next year on Data Center Systems, Enterprise Software, and Devices (the three areas of IT spend that ITAM can address) will be $1.35 trillion. If ITAM can reduce this spending by just 5% (which we have already said is a very conservative estimate for the industry), that alone equates to over $67.7 billion of potential savings from ITAM alone. If just some of these savings were applied toward talent retention, they could protect hundreds of thousands of jobs around the world. Before IT departments slash critical projects or lay off staff, we urge them to look at their IT spend first to see where savings could be made. Remember that cutting IT jobs doesn’t just reduce the bottom line, it means the removal of talent, careers and institutional knowledge -- in comparison to IT waste, which is removing unused or unwanted resources with no impact whatsoever on delivery of services. What’s more, with many IT purchases having been rushed through during the March/April period to support home working, there is a high likelihood of “bloatware” across organizations that could yield higher than average savings than you would typically expect in an ITAM project.


Covid-19 vaccine supply chain attacked by unknown nation state

The X-Force team said its analysis pointed to a "calculated operation" starting in September, spanning six countries and targeting organisations associated with international vaccine alliance Gavi's Cold Chain Equipment Optimisation Platform (CCEOP). It was unable to precisely attribute the campaign, but said that the precision targeting of key executives at relevant organisations bore the "potential hallmarks of nation-state tradecraft". IBM senior strategic cyber threat analyst Claire Zaboeva wrote: "While attribution is currently unknown, the precision targeting and nature of the specific targeted organisations potentially point to nation-state activity. "Without a clear path to a cash-out, cyber criminals are unlikely to devote the time and resources required to execute such a calculated operation with so many interlinked and globally distributed targets. Likewise, insight into the transport of a vaccine may present a hot black-market commodity. ..." According to IBM X-Force, the attacker has been impersonating an executive at Haier Biomedical, a cold chain specialist, to target organisations including the European Commission's Directorate General for Taxation and Customs Union, and companies in the energy, manufacturing, website creation and software and internet security sectors.



Quote for the day:

"Every great leader can take you back to a defining moment when they decided to lead." -- John Paul Warren

Daily Tech Digest - December 02, 2020

Establish AI Governance, Not Best Intentions, to Keep Companies Honest

Transparency is necessary to adapt analytic models to rapidly changing environments without introducing bias. The pandemic’s seesawing epidemiologic and economic conditions are a textbook example. Without an auditable, immutable system of record, companies have to either guess or pray that their AI models still perform accurately.  This is of critical importance as, say, credit card holders request credit limit increases to weather unemployment. Lenders want to extend as much additional credit as prudently possible, but to do so, they must feel secure that the models assisting such decisions can still be trusted. Instead of ferreting through emails and directories or hunting down the data scientist who built the model, the bank’s existing staff can quickly consult an immutable system of record that documents all model tests, development decisions and outcomes. They can see what the credit origination model is sensitive to, determine if features are now becoming biased in the COVID environment, and build mitigation strategies based on the model’s audit investigation. Responsibility is a heavy mantle to bear, but our societal climate underscores the need for companies to use AI technology with deep sensitivity to its impact. 


The three stages of security risk reprioritization

As organizations currently undergo planning and budget allocation for 2021, they are looking to invest in more permanent solutions. IT teams are trying to understand how they can best invest in solutions that will ensure a strong security posture. There is also a growing recognition of the need for complete visibility into the endpoint, even as devices operate on remote networks. Policies are being created around how much work should actually be done over a VPN, and organizations are by default creating more forward-looking, permanent policies and technology solutions. But as security teams embrace new tools for security and operations to enable continuity efforts, those tools also generate new attack vectors. COVID-19 has presented the opportunity for the IT community to evaluate what can and can't be trusted, even when operating under Zero Trust architectures. For example, some of the technologies, like VPN, can undermine what they were designed for. At the beginning of the pandemic, CISA issued a warning around the continued exploitation of specific VPN vulnerabilities.


Updates To The Open FAIR Body Of Knowledge Part 2

The Open FAIR BoK Update Project Working Group made a deliberate effort to more logically present information in O-RA. In Section 4: Risk Measurement: Modeling and Estimate, the ideas of accuracy and precision are now presented before the concepts of subjectivity and objectivity, and the section ends with the concepts of estimates and calibration. O-RA now also emphasizes having usefully precise estimates; in other words, an estimate is usefully precise if more precision would not improve or change the decision being made with the information. The concept of "Confidence Level in the Most Likely Value" as a parameter to model estimates has been removed from O-RA in bringing it to Version 2.0. Instead, this concept has been replaced by the choice of distribution that best represents what the Open FAIR risk analyst knows about the risk factor being modelled; however, Open FAIR is agnostic on the distribution type used. O-RA Version 2.0 also takes inspiration from the Open FAIR™ Risk Analysis Process Guide to better define how to do an Open FAIR risk analysis in Section 5: Risk Analysis Process and Methodology. To do this, O-RA specifies that a risk analyst must first scope the analysis by identifying a Loss Scenario (Stage 1). The Loss Scenario is the story of loss that forms a sentence from the perspective of the Primary Stakeholder.
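
To show how the distribution choice feeds an analysis in practice, here is a hedged illustration rather than anything prescribed by the standard: calibrated minimum / most-likely / maximum estimates for loss event frequency and loss magnitude are sampled (triangular distributions here, though O-RA is agnostic on the type) to produce an annualised loss exposure range. All numbers are invented.

```python
# Hedged illustration, not the Open FAIR standard itself: Monte Carlo over
# calibrated min / most-likely / max estimates for one Loss Scenario.
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

frequency = rng.triangular(0.5, 2, 6, N)                  # loss events per year
magnitude = rng.triangular(20_000, 80_000, 400_000, N)    # loss per event

annual_loss = frequency * magnitude
p10, p50, p90 = np.percentile(annual_loss, [10, 50, 90])
print(f"annualised loss exposure: p10 ${p10:,.0f}  median ${p50:,.0f}  p90 ${p90:,.0f}")
```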


'Return to Office' Phishing Emails Aim to Steal Credentials

In the phishing campaign uncovered by Abnormal Security, the emails are disguised as an automated internal notification from the company as indicated by the sender's display name. "But the sender's actual address is 'news@newsletterverwaltung.de,' an otherwise unknown party," the research report states. "Further, the IP originates from a blacklisted VPN service that is not consistent with the corporate IP, which indicates the sender is impersonating the automated internal system." The emails, sent to specific employees, contain an HTML attachment that bears the recipient's name, which lures employees into opening it. The email also contains text that makes it seem as if the recipient has received a voicemail, researchers state. Clicking on the attachment redirects the user to a SharePoint document with new instructions on the company's remote working policy. "Underneath the new policy, there is text that states 'Proceed with acknowledgement here.' Clicking on this link redirects the user to the attack landing page, which is a form to enter the employee's email credentials," researchers note. Once a recipient falls victim to this trap, the login credentials for their email account are harvested.


CIO interview: John Davidson, First Central Group

“Intelligent automation means so much more for us than an efficiency tool,” says Davidson. “We are building an entirely new technical competency into our business, so that it becomes part of our DNA. This not only changes operational execution but, importantly, changes the management mindset about the art of the possible and strategic decision-making.” The automated renewal process is another area where Blue Prism has been deployed. With the support of Blue Prism’s partner, IT and automation consultancy T-Tech, the First Central team can check the accuracy of more than 3,000 renewal invitations issued daily in just two hours. The new process verifies each renewal notice, removing the need for costly, time-intensive manual work downstream to correct anomalies and reducing the risk of a regulatory incident. Along with driving operational efficiencies, Davidson believes RPA also boosts business confidence. “Risk mitigation is a lot more intangible, but we can measure the cost of distraction and we can measure our effectiveness from a robotics perspective,” he says. Davidson’s team has established a robotics capability for the business. “It is not my job to close down operational risk,” he says. “That’s the responsibility of the process owner. My team has to deliver technology that closes down the risk.”


Q&A on the Book The Power of Virtual Distance

Virtual work gives us many options as to where, when and how to work, and this is highly useful and a positive development. However, as we discovered from the beginning, the trade-offs and unintended consequences are extensive and need to be corrected. When we work mainly through screens, the human contextual markers that guide our cognitive and emotional selves, letting us know whom we can trust and under what circumstances, disappear behind virtual curtains. We have shown conclusively that high Virtual Distance is the statistical equivalent of Distrust, while lower Virtual Distance results in the strong trust bonds we need to build relationship foundations that ultimately result in both better work product and higher levels of well-being. Recently a senior executive from a large global company expressed his concern that many leaders do not trust their employees to work virtually. And we’ve found that it’s a two-way street, as many employees don’t trust their leaders to assess or treat them fairly under these conditions. The erosion of trust was highly problematic before COVID-19. Now, it’s risen to the level of a “crisis of distrust”.


Why I'd Take Good IT Hygiene Over Security's Latest Silver Bullet

The most common way to perform lateral movement is to reuse privileges on the assets attackers have a foothold on, such as secrets and credentials stored on breached machines. Vendors will preach that they can distinguish between legitimate traffic and lateral movements — and even automatically block such illicit activity. They'll use terms like machine learning and AI to make their product sound advanced, but these capabilities are very limited. The product may block well-known malware that performs the exact same sequence in any invocation and hence has been "signed" by the vendor — making such products glorified, network-based, signature-matching systems. But because AI and machine learning are based on training, they aren't able to distinguish between legitimate traffic and lateral movement with an accuracy that fully supports runtime prevention. Moreover, no one knows how these applications work in all scenarios. Are you willing to block traffic just because it hasn't been seen before? Or what about an edge case in the app it's never seen? On the other hand, managing lateral movement risk is definitely possible. This can be done by analyzing the secrets and privileges stored on and associated with any given asset and determining whether they're overly permissive.
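
As a hedged sketch of what such privilege analysis might look like in practice (the inventory format, reach numbers, and threshold below are all invented for illustration), one could flag assets that hold credentials with an outsized blast radius:

```python
# Hypothetical inventory: which credentials are cached on which assets, and
# how many machines each credential can reach. Field names are illustrative.
stored_credentials = {
    "web-01": ["svc_backup", "local_admin"],
    "hr-laptop-17": ["domain_admin"],
}
credential_reach = {
    "svc_backup": 12,       # machines reachable with this credential
    "local_admin": 1,
    "domain_admin": 450,
}

REACH_THRESHOLD = 25  # arbitrary example cut-off for "overly permissive"

def overly_permissive(assets, reach, threshold=REACH_THRESHOLD):
    """Flag assets holding credentials whose blast radius exceeds the threshold."""
    findings = []
    for asset, creds in assets.items():
        for cred in creds:
            if reach.get(cred, 0) > threshold:
                findings.append((asset, cred, reach[cred]))
    return findings

for asset, cred, n in overly_permissive(stored_credentials, credential_reach):
    print(f"{asset}: credential '{cred}' can reach {n} machines - review or remove")
```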


Automation Justification

The human touch is also recommended in code reviews — yes, please use the code grammar checkers and test coverage tools, but getting your code reviewed and reviewing others’ code benefits everyone involved. Sometimes folks worry about the cost of tools and labor to get the process started. Lastly, when starting a larger automation project, do not try to do everything at once. Prioritizing and easing into the automation process makes it simpler and increases the probability it can be done with no loss of functionality. As for naysayers, some of the reasons people give are “if it ain’t broke, don’t fix it,” some don’t feel comfortable if they are not in control, sometimes the person does not understand the tools needed, and some folks feel like a computer will replace their job. So what do we do? Show them metrics that demonstrate the improvement, teach them how to use the tools, or just let them know that, now that their time is freed up, they can do more meaningful, fun, cool stuff with it. Alluding back to an earlier slide, here are some metrics that will show your team, your management, and the bean counters some improvement: cost and time savings; test coverage and speedup; customer satisfaction; fewer defects; faster time to release, as well as to recover from issues; and reduced risk.
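
A toy calculation along these lines, using invented numbers purely to illustrate the kind of time-savings and payback metrics worth presenting, might look like this:

```python
# Hypothetical before/after numbers, used only to show the sort of metrics
# (hours saved, monthly savings, payback period) that win over skeptics.
manual_minutes_per_run = 45
automated_minutes_per_run = 5
runs_per_month = 120
setup_cost = 8_000            # one-off tooling and labor cost, dollars
loaded_cost_per_hour = 60     # fully loaded engineer cost, dollars

hours_saved_per_month = (manual_minutes_per_run - automated_minutes_per_run) * runs_per_month / 60
monthly_savings = hours_saved_per_month * loaded_cost_per_hour
payback_months = setup_cost / monthly_savings

print(f"Hours saved per month: {hours_saved_per_month:.0f}")
print(f"Monthly savings: ${monthly_savings:,.0f}")
print(f"Payback period: {payback_months:.1f} months")
```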


The vicious cycle of circular dependencies in microservices

In software engineering, modularity refers to the degree to which an application can be divided into independent, interchangeable modules that work together to form a single functioning item serving a specific business function. Modularity promotes reusability, better maintenance and manageability, and low coupling with high cohesion. Despite the benefits it offers, modular design is still plagued by dependency problems. In a typical microservices architecture, you'll often encounter dependencies among the services and components. Although these services are modeled as isolated, independent units, they still need to communicate for the purpose of data and information exchange. Ideally, a microservices application shouldn't contain circular dependencies: one service should not call another one directly; instead, those services should operate on event-based triggers. However, reality dictates that most developers will still need to closely link certain parts of an application, and problematic dependencies will persist. A circular dependency is defined as a relationship between two or more application modules that are codependent.
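
A minimal sketch of the event-based decoupling described above, using an in-memory bus as a stand-in for a real message broker such as Kafka or RabbitMQ (the service names and event types are illustrative, not from any particular system):

```python
from collections import defaultdict

class EventBus:
    """Toy in-memory stand-in for a message broker such as Kafka or RabbitMQ."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()

# The order service publishes an event instead of calling the billing service directly.
def place_order(order_id):
    print(f"order-service: order {order_id} placed")
    bus.publish("order.placed", {"order_id": order_id})

# The billing service reacts to the event; it never holds a reference to the
# order service, so no call cycle between the two can form.
def on_order_placed(event):
    print(f"billing-service: invoicing order {event['order_id']}")

bus.subscribe("order.placed", on_order_placed)
place_order("A-1001")
```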


What is cyber insurance? Everything you need to know about what it covers and how it works

Different policy providers might offer coverage of different things, but generally cyber insurance is likely to cover the immediate costs associated with falling victim to a cyberattack. "Cyber insurance policies are designed to cover the costs of security failures, including data recovery, system forensics, as well as the costs of legal defence and making reparations to customers," says Mark Bagley, VP at cybersecurity company AttackIQ. Underwriting data recovery and system forensics, for example, would help cover some of the cost of investigating and remediating a cyberattack by employing forensic cybersecurity professionals to find out what happened – and fix the issue. This is the sort of standard procedure that follows in the aftermath of a ransomware attack, one of the most damaging and disruptive kinds of incident an organisation can face right now. It is also the case that some cyber insurance companies will cover the cost of actually giving in and paying a ransom – even though that's something that law enforcement and the information security industry do not recommend, as it just encourages cyber criminals to commit more attacks.



Quote for the day:

"Leadership is not a position. It is a combination of something you are (character) and some things you do (competence)." -- Ken Melrose

Daily Tech Digest - December 01, 2020

Beginner's Guide to Quantum Machine Learning

Whenever you think of the word "quantum," it might trigger the idea of an atom or molecule. Quantum computers are built on a similar idea. In a classical computer, processing occurs at the bit level. In the case of quantum computers, a particular behavior governs the system; namely, quantum physics. Within quantum physics, we have a variety of tools that are used to describe the interaction between different atoms. In the case of quantum computers, these units are called "qubits" (we will discuss that in detail later). A qubit acts as both a particle and a wave, and a wave distribution can store far more data than a particle (or bit). Loss functions are used to keep a check on how accurate a machine learning solution is. While training a machine learning model and getting its predictions, we often observe that not all the predictions are correct. The loss function is represented by some mathematical expression, the result of which shows by how much the algorithm has missed the target. A quantum computer also aims to reduce the loss function. It has a property called quantum tunneling, which searches through the entire loss function space and finds the value where the loss is lowest, and hence where the algorithm will perform the best, at a very fast rate.
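
For readers new to the loss-function idea, here is a small classical example (nothing quantum-specific): mean squared error simply measures by how much the predictions miss their targets, and training searches for the parameters that minimise it.

```python
import numpy as np

def mse_loss(predictions, targets):
    """Mean squared error: the average squared amount by which predictions miss the targets."""
    return np.mean((np.asarray(predictions) - np.asarray(targets)) ** 2)

targets = [1.0, 0.0, 1.0, 1.0]
predictions = [0.9, 0.2, 0.7, 1.0]
print(f"loss = {mse_loss(predictions, targets):.4f}")  # smaller is better; training seeks the minimum
```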


How to Develop Microservices in Kubernetes

Iterating from local development to Docker Compose to Kubernetes has allowed us to efficiently move our development environment forward to match our needs over time. Each incremental step forward has delivered significant improvements in development cycle time and reductions in developer frustration. As you refine your development process around microservices, think about ways you can build on the great tools and techniques you have already created. Give yourself some time to experiment with a couple of approaches. Don’t worry if you can’t find one general-purpose, one-size-fits-all system that is perfect for your shop. Maybe you can leverage your existing sets of manifest files or Helm charts. Perhaps you can make use of your continuous deployment infrastructure, such as Spinnaker or ArgoCD, to help produce developer environments. If you have time and resources, you could use Kubernetes libraries for your favorite programming language to build a developer CLI that lets each developer manage their own environment. Building your development environment for sprawling microservices will be an ongoing effort. However you approach it, you will find that the time you invest in continuously improving your processes pays off in developer focus and productivity.
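
As one hedged example of that last idea, a developer CLI could start as little more than a script built on the official Kubernetes Python client that provisions a per-developer namespace (the naming convention and labels below are assumptions, not a prescribed pattern):

```python
# Minimal sketch, assuming the official 'kubernetes' Python client and a
# working local kubeconfig; the "dev-<name>" convention is an illustrative choice.
from kubernetes import client, config

def create_dev_namespace(developer: str) -> None:
    config.load_kube_config()                      # uses the developer's local kubeconfig
    api = client.CoreV1Api()
    ns = client.V1Namespace(
        metadata=client.V1ObjectMeta(
            name=f"dev-{developer}",
            labels={"purpose": "developer-environment"},
        )
    )
    api.create_namespace(ns)
    print(f"Created namespace dev-{developer}")

if __name__ == "__main__":
    create_dev_namespace("alice")
```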


Enabling the Digital Transformation of Banks with APIs and an Enterprise Architecture

One is the internal and system APIs. Core banking systems are monolith architectures. They are still based on mainframes and COBOL [programming language]. They are legacy technologies and do not necessarily come out of the box with open APIs. Having internal and system APIs helps to speed up the development of new microservices based on these legacy systems, or of services that use the legacies as back-ends. The second category of APIs is public APIs. These are APIs that connect a bank’s back-end systems and services. They are a service layer, which is necessary for external services. For example, they might be used to obtain a credit rating or address validation. You don’t want to do these validations yourself when the validity of a customer record is checked. Take the confirmation of postal codes in the U.S. In the process of creating a customer record, you use an API from your own system to link to an external address validation system. That system will let you know if the postal code is valid or not. You don’t need your own internal resources to do that. And the same applies, obviously, to credit rating, which is information that you can’t have as a bank. The third type of API, and probably the most interesting one, is the public APIs that are more on the service and front-end layers.
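
To illustrate the pattern rather than any specific vendor API, a hedged sketch of delegating postal-code validation to an external service during customer-record creation might look like this (the endpoint URL and response fields are hypothetical):

```python
import requests

# Hypothetical endpoint and response shape, purely to illustrate delegating
# postal-code validation to an external service API instead of building it in-house.
VALIDATION_URL = "https://address-validator.example.com/v1/validate"

def postal_code_is_valid(postal_code: str, country: str = "US") -> bool:
    resp = requests.get(
        VALIDATION_URL,
        params={"postalCode": postal_code, "country": country},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json().get("valid", False)

# Called from the customer-record creation flow rather than validating locally.
if postal_code_is_valid("10027"):
    print("Postal code accepted")
```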


Can't Afford a Full-time CISO? Try the Virtual Version

For a fraction of the salary of a full-time CISO, companies can hire a vCISO: an outsourced security practitioner with executive-level experience who, acting as a consultant, offers their time and insight to an organization on an ongoing (typically part-time) basis with the same skillset and expertise as a conventional CISO. Hiring a vCISO on a part-time (or short-term) basis allows a company the flexibility to outsource impending IT projects as needed. A vCISO will work closely with senior management to establish a well-communicated information security strategy and roadmap, one that meets the requirements of the organization and its customers as well as state and federal requirements. Most importantly, a vCISO can provide companies with unbiased strategic and operational leadership on security policies, guidelines, controls, and standards, as well as regulatory compliance, risk management, vendor risk management, and more. Since vCISOs are already experts, the organization saves time and money through reduced ramp-up time, and businesses are able to eliminate the cost of benefits and full-time employee onboarding requirements.


Why the insurance industry is ready for a data revolution

As it stands today, when a customer chooses a traditional motor insurance policy and is provided with a quote, the price they are given will be based on broad generalisations made about their personal background as an approximate proxy for risk. This might include their age, their gender, their nationality, and there have even been examples of people being charged hundreds of pounds more for policies because of their name. If this kind of profiling took place in other financial sectors, there would be outcry, so why is insurance still operating with such an outdated model? Well, up until now, there has been little innovation in the insurance sector and as a result, little alternative in the way that policies can be costed. But now, thanks to modern telematics, the industry finally has the ability to provide customers with an accurate and fair policy, based on their true risk on the road: how they really drive. Telematics works by monitoring and gathering vehicle location and activity data via GPS and today we can track speed, the number of hours spent in the vehicle, the times of the day that customers are driving, and even the routes they take. We also have the technology available to consume and process swathes of this data in real time.
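
A purely illustrative sketch of how such telematics signals could feed a usage-based risk score (the weights, thresholds, and formula are invented, not any insurer's actual model):

```python
# Illustrative only: the weights, the assumed 110 km/h reference speed, and the
# 0-100 scale are invented to show how telematics signals might be combined.
def trip_risk_score(max_speed_kmh, night_driving_hours, total_hours, harsh_brakes):
    speeding = max(0.0, (max_speed_kmh - 110) / 110)                 # fraction above an assumed limit
    night_share = night_driving_hours / total_hours if total_hours else 0.0
    braking = min(harsh_brakes / 10, 1.0)
    return round(100 * (0.5 * speeding + 0.3 * night_share + 0.2 * braking), 1)

print(trip_risk_score(max_speed_kmh=128, night_driving_hours=1.5, total_hours=6, harsh_brakes=3))
```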


Foiling RaaS attacks via active threat hunting

One of the tactics that really stands out – and they’re not the only attackers to do it, but they are one of the first – is actually making a copy of and stealing the victim’s data prior to the ransomware payload execution. The benefit the attacker gets from this is that they can now leverage it for additional income: they threaten to post the victim’s sensitive information or customer data publicly. This is just another way to further extort the victim and increase the amount of money they can ask for. And now you have these victims that have to worry not only about having all their data taken from them, but about actual public exposure. It’s becoming a really big problem, but those sorts of tactics – as well as using social media to taunt the victim and hosting their own infrastructure to store and post data – are all elements that, prior to seeing them used with Ransomware-as-a-Service, were not widely seen in traditional enterprise ransomware attacks. ... You can’t trust that paying them is going to keep you protected. Organizations are in a bad spot when this happens, and they’ll have to make those decisions on whether it’s worth paying.


Sizing Up Synthetic DNA Hacking Risks

Rami Puzis, head of the Ben-Gurion University Complex Networks Analysis Lab and a co-author of the study, tells ISMG that the researchers decided to examine potential cybersecurity issues involving the synthetic bioengineering supply chain for a number of reasons. "As with any new technology, the digital tools supporting synthetic biology are developed with effectiveness and ease of use as the primary considerations," he says. "Cybersecurity considerations usually come in much later when the technology is mature and is already being exploited by adversaries. We knew that there must be security gaps in the synthetic biology pipeline. They just need to be identified and closed." The attack scenario described by the study underscores the need to harden the synthetic DNA supply chain with protections against cyber biological threats, Puzis says. "To address these threats, we propose an improved screening algorithm that takes into account in vivo gene editing. We hope this paper sets the stage for robust, adversary resilient DNA sequence screening and cybersecurity-hardened synthetic gene production services when biosecurity screening will be enforced by local regulations worldwide."


Securing the Office of the Future

The vast majority of the things that we see every day are things that you never read about or hear about. It’s the proverbial iceberg diagram. That being said, in this interesting and very unique time that we are in, there is a commonality – and Sean’s actually already mentioned it once today – there are two major attack patterns that we’re seeing over and over. These are not new things; they’re just very opportunistically preyed upon right now because of COVID and because of the remote work environment: ransomware and spear phishing, or regular old phishing attacks. People are at a distance and expected to be working virtually today, and threat actors know that, so they’re getting better and better at laying booby traps, if you will, in e-mail to get people to click on attachments and other sorts of links. ... Coincidentally, or perhaps not coincidentally, one of the characters in our comic is called Phoebe the Phisher, and we were very deliberate about creating that character. She has a harpoon, of course, which is for, you know, whale phishing. She has a spear for targeted spear phishing, and she also has a, you know, phishing rod for kind of regular, you know, spray-and-pray kind of phishing.


How to maximize traffic visibility with virtual firewalls

The biggest advantage of a virtual firewall, however, is its support for the obvious dissolution of the enterprise perimeter. Even if an active edge DMZ is maintained through load-balanced operation, every enterprise is experiencing the zero-trust-based extension of its operation to more remote, virtual operation. Introducing support for virtual firewalls, even in traditional architectures, is thus an excellent forward-looking initiative. An additional consideration is that cloud-based functionality requires policy management for hosted workloads – and virtual firewalls are well suited to such operation. Operating in a public, private, or hybrid virtual data center, virtual firewalls can protect traffic to and from hosted applications. This can include connections from the Internet, or from tenants located within the same data center enclave. One of the most important functions of any firewall – whether physical or virtual – involves the inspection of traffic for evidence of anomalies, breaches, or other policy violations. It is here that virtual firewalls have emerged as offering particularly attractive options for enterprise security teams building out their threat protection.
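
As a toy illustration of policy management for hosted workloads, a virtual firewall's east-west rules can be thought of as an allow-list matched against each flow between tenants (the tiers and ports below are invented examples, not a real policy):

```python
# Toy policy check illustrating east-west rules between hosted workloads;
# the rules and flow are invented examples for illustration only.
ALLOW_RULES = [
    {"src": "web-tier", "dst": "app-tier", "port": 8443},
    {"src": "app-tier", "dst": "db-tier", "port": 5432},
]

def is_allowed(flow, rules=ALLOW_RULES):
    return any(
        flow["src"] == r["src"] and flow["dst"] == r["dst"] and flow["port"] == r["port"]
        for r in rules
    )

flow = {"src": "web-tier", "dst": "db-tier", "port": 5432}
print("allow" if is_allowed(flow) else "deny and log for inspection")
```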


More than data

First of all, the system has to be told where to find the various clauses in a set of sample contracts. This can easily be done by marking the respective portions of text and labelling them with the names of the clauses they contain. On this basis we can train a classifier model that – when reading through a previously unseen contract – recognises what type of contract clause can be found in a certain text section. With a ‘conventional’ (i.e. not DL-based) algorithm, a small number of examples should be sufficient to generate an accurate classification model that is able to partition the complete contract text into the various clauses it contains. Once a clause is identified within a certain contract of the training data, a human can identify and label the interesting information items contained within it. Since the text portion of one single clause is relatively small, only a few examples are required to come up with an extraction model for the items in one particular type of clause. Depending on the linguistic complexity and variability of the formulations used, this model can be generated using ML, by writing extraction rules that make use of keywords, or – in exceptionally complicated situations – by applying natural language processing algorithms that dig deep into the syntactic structure of each sentence.
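
A minimal sketch of the ‘conventional’ clause classifier described above, assuming a small hand-labelled set of clause texts; a TF-IDF plus logistic-regression pipeline from scikit-learn stands in here for whatever non-DL model is actually used, and the example clauses are invented:

```python
# Minimal sketch: a conventional (non-deep-learning) text classifier trained
# on a handful of hand-labelled clause examples. Training data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

clauses = [
    "Either party may terminate this agreement with thirty days written notice.",
    "The licensee shall pay the fees set out in schedule A within 30 days of invoice.",
    "Each party shall keep the other party's information strictly confidential.",
    "This agreement may be terminated immediately upon material breach.",
]
labels = ["termination", "payment", "confidentiality", "termination"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(clauses, labels)

# With a realistic training set, a sentence like this should map to 'payment'.
print(clf.predict(["All invoices are due within sixty days of receipt."]))
```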



Quote for the day:

"You have achieved excellence as a leader when people will follow you everywhere if only out of curiosity." -- General Colin Powell