Daily Tech Digest - August 18, 2021

True Success in Process Automation Requires Microservices

To future-proof early investments in RPA, organizations need to implement an orchestration layer separate from their bot layer. Many RPA implementations lack a “driving” capability that can connect one process to another. In the insurance example above, one bot that inputs a claim can connect to another that inputs data into the modern CRM system (and so on until the claims process is completed). To take that modernization a step further, development teams can focus on replacing RPA bots one by one with applications built on a microservices architecture. Ripping and replacing legacy systems outright is expensive and daunting for most organizations; in reality, gradual digital transformation makes more sense. RPA bots can help enable this transformation by keeping legacy systems functional while developers re-architect and modernize business applications in order of priority. Think of it as switching a house over to LED bulbs one by one — you can still keep the lights on in the rest of the house as each bulb gets updated.


Ransomware recovery: 8 steps to successfully restore from backup

"In many cases, enterprises don't have the storage space or capabilities to keep backups for a lengthy period of time," says Palatt. "In one case, our client had three days of backups. Two were overwritten, but the third day was still viable." If the ransomware had hit over, say, a long holiday weekend, then all three days of backups could have been destroyed. "All of a sudden you come in and all your iterations have been overwritten because we only have three, or four, or five days." ... In addition to keeping the backup files themselves safe from attackers, companies should also ensure that their data catalogs are safe. "Most of the sophisticated ransomware attacks target the backup catalog and not the actual backup media, the backup tapes or disks, as most people think," says Amr Ahmed, EY America's infrastructure and service resiliency leader. This catalog contains all the metadata for the backups, the index, the bar codes of the tapes, the full paths to data content on disks, and so on. "Your backup media will be unusable without the catalog," Ahmed says. 


LockBit 2.0 Ransomware Proliferates Globally

“Once in the domain controller, the ransomware creates new group policies and sends them to every device on the network,” Trend Micro researchers explained. “These policies disable Windows Defender, and distribute and execute the ransomware binary to each Windows machine.” This main ransomware module goes on to append the “.lockbit” suffix to every encrypted file. Then, it drops a ransom note into every encrypted directory threatening double extortion; i.e., the note warns victims that files are encrypted and may be publicly published if they don’t pay up. ... Trend Micro has been tracking LockBit over time, and noted that its operators initially worked with the Maze ransomware group, which shut down last October. Maze was a pioneer in the double-extortion tactic, first emerging in November 2019. It went on to make waves with big strikes such as the one against Cognizant. In summer 2020, it formed a cybercrime “cartel” – joining forces with various ransomware strains (including Egregor) and sharing code, ideas and resources.


Mandiant Discloses Critical Vulnerability Affecting Millions of IoT Devices

Over the course of several months, the researchers developed a fully functional implementation of ThroughTek’s Kalay protocol, which enabled the team to perform key actions on the network, including device discovery, device registration, remote client connections, authentication, and, most importantly, processing audio and video (“AV”) data. Just as important as processing AV data, the Kalay protocol also implements remote procedure call (“RPC”) functionality. This varies from device to device but typically is used for device telemetry, firmware updates, and device control. Having written a flexible interface for creating and manipulating Kalay requests and responses, Mandiant researchers focused on identifying logic and flow vulnerabilities in the Kalay protocol. The vulnerability discussed in this post affects how Kalay-enabled devices access and join the Kalay network. The researchers determined that the device registration process requires only the device’s 20-byte uniquely assigned identifier (called a “UID” here) to access the network.


CQRS in Java Microservices

Command Query Responsibility Segregation (CQRS) is a pattern in service architecture. It is a separation of concerns, that is, separation of services that write from services that read. Why would you want to separate read and write services? One of the advantages of microservices is the ability to scale services independently. We can often say with some level of certainty that one set of services will be busier than others. If they are separate, they can be scaled to best fit the normal use case and conserve cloud cycles. I will be looking into CQRS provided by the Axon library. Axon implements CQRS with event sourcing. The idea behind event sourcing is that your commands are executed by sending events to all subscribers. Instead of storing state in your persistence store, you store the immutable events, so you always have a record of the events that led up to a particular state. Inside your program, you will have an aggregate, which represents a stateful object but is ephemeral in that the system can bring it in and out of existence as needed.
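To make that flow concrete, here is a minimal sketch of an event-sourced aggregate, assuming Axon Framework 4 and hypothetical PlaceOrderCommand/OrderPlacedEvent types invented for illustration: the command handler validates and applies an event, and the event-sourcing handler rebuilds state by replaying stored events.

```java
import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.modelling.command.AggregateIdentifier;
import org.axonframework.modelling.command.AggregateLifecycle;
import org.axonframework.modelling.command.TargetAggregateIdentifier;
import org.axonframework.spring.stereotype.Aggregate;

// Hypothetical command and event types, named only for this example.
record PlaceOrderCommand(@TargetAggregateIdentifier String orderId) {}
record OrderPlacedEvent(String orderId) {}

@Aggregate
public class OrderAggregate {

    @AggregateIdentifier
    private String orderId;

    protected OrderAggregate() {
        // Required by Axon so the aggregate can be rebuilt from its event stream.
    }

    @CommandHandler
    public OrderAggregate(PlaceOrderCommand cmd) {
        // Command side: validate, then record an immutable event instead of mutating state directly.
        AggregateLifecycle.apply(new OrderPlacedEvent(cmd.orderId()));
    }

    @EventSourcingHandler
    public void on(OrderPlacedEvent evt) {
        // State is reconstructed by replaying events, so the aggregate can be ephemeral.
        this.orderId = evt.orderId();
    }
}
```

A separate read-side projection would subscribe to OrderPlacedEvent and maintain the query model, which is what lets the write and read sides scale independently.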


AIOps Strategies for Augmenting Your IT Operations

Enrichment is the unsung hero of the entire event correlation process. Raw alarm data is a start, but it’s not sufficient to be able to pinpoint the root cause and enable an effective fix. When you have alerts coming in from a variety of domains, it can be difficult to correlate them to produce a fine-tuned set of tickets. You can use timestamps or point of origin, but that will provide limited insight, and you'll miss connections between related alerts coming from other sources or from other time windows. Easy-to-deploy alert enrichments add value to every single alert, providing the extra layer of understanding needed to determine which alerts are interrelated, and in what way, enabling you to focus on high-level correlated incidents instead of chasing every low-level alert that comes into the AIOps platform. Done right, this process of enrichment reduces the ‘noise’, and helps you bring in topology information from your CMDB, APM, and orchestration tools, change information from your change management and CI/CD pipelines, and business context from your team’s knowledge and procedures.


Addressing the demand for global software developer talent

Clear upskilling career paths should be provided for new and experienced software developers. Younger developers will expect rapid career advances — show them fast and more attractive ways forward, such as more opportunities to work on innovation projects and technologies or earn a new job title or salary due to learning a new skill. Experienced developers may want more time to explore new technologies, some freedom to decide what to work on next, or simply to shore up what they have been working on for years. A mentoring programme connecting graduates with more experienced developers is a good idea. However, it may add an onerous workload. Supplement that ‘human’ support with tools that, for instance, help monitor code quality, engendering a consistent level of coding practice and reducing the number of errors that escape into production. Be flexible with everyone’s working hours, location, and choice of tools. Give them superior quality hardware and other workplace products to make their jobs easier. Online training and permission to spend work time on it are essential.


IT Leadership: 11 Future of Work Traits

“Three- to five-year plans got smashed into a single year plan,” says Sarah Pope, VP of Future of Technology, global consulting company Capgemini. “Two priorities that became obvious as a result of COVID are customer experience and employee experience. Customer experience didn't have to be just 'good,' it needed to reflect customers' new behaviors and patterns. Similarly, employee experience wasn't just about technology enablement and corporate culture, but about how work fits into digital lives.” Enterprises have been pushing to reopen their offices and business leaders are well aware that not everyone will want to return. While there's a general acknowledgement that hybrid workplaces will be the norm going forward, few organizations know what that will really look like. However, it's obvious that if some people refuse to return to the office at all, and others only want to work in the office a couple of days per week, businesses need to make smart use of space, people, and time. ... “Secondarily, [they'll want to bring people together in a physical environment] from a maintaining culture and community perspective, [such as hosting] those dinners or workshops that can tack on to a team event.”


5 things to know about pay-per-use hardware

When enterprise IT teams get a quote for consumption-based infrastructure, many will find themselves in unfamiliar territory, having never evaluated this kind of pricing scheme. “It’s easy for HP or Dell to come in and say how much they’re going to charge you per core, but then you realize you have no idea whether that price is fair. That’s not how you calculate things in your own facilities, and it’s apples to oranges versus public cloud costs,” Bowers said. “As soon as enterprises are given a quote, they tend to go into spreadsheet hell for three months, trying to figure out whether that quote is fair. So it can take three, four, five months to negotiate a first deal.” Enterprises struggle to evaluate consumption-based proposals, and they lack confidence in their usage forecasts, Bowers said. “It takes a lot of financial acumen to adopt one of these programs.” Experience can help. “The companies that make the most confident decisions are those that did a lot of leasing in the past. Not because this is a lease, but because those companies have the mental muscles to be able to evaluate the financial aspects of time, value, variable payments, and risks of payment spreads,” Bowers said.


An upbeat outlook for UK IoT sector despite barriers

The concern within the UK IoT sector reflects the fact that permanent roaming – as the typical solution to delivering multi-region IoT projects – remains fraught with problems. These range from the inability of roaming agreements to support device Power Saving Modes, to the frequently arising commercial disputes, the performance issues caused by having to backhaul data, and the fact several countries have placed a complete ban on permanent roaming. In contrast to the US environment, where two dominant operators (AT&T and Verizon) deliver the majority of coverage, the European environment is far more fragmented with multiple operators delivering regional coverage. This adds a considerable layer of complexity and commercial disputes can threaten the viability of multi-region rollouts, creating a concerning degree of risk to IoT projects. UK IoT professionals are very aware that this issue of cellular connectivity must be resolved to ensure the viability of future large-scale projects – eight out of ten agree or strongly agree that the evolution of intelligent connectivity is going to be critical to continue to fuel adoption of IoT.



Quote for the day:

"Good leaders must first become good servants." -- Robert Greenleaf

Daily Tech Digest - August 17, 2021

It May Be Too Early to Prepare Your Data Center for Quantum Computing

The fact that there are multiple radically different approaches to quantum computing under development, with no assurance that any will meet market success (let alone market dominance), speaks to quantum computing's infancy. Merzbacher compares the situation to the early days of microprocessors, when there was a debate on whether computer chips should be made of silicon or germanium. "There were arguments for germanium. It's a better system for semiconductor computing in some sense, but it's expensive, not as easy to manufacture, and it's not as common, so in the end, it was silicon," she said. Quantum computing hasn't reached a point where "everybody settled on a technology here, and so there still is uncertainty. It may be that the IBM approach is better for certain types of computing, and then the trapped-ion approaches [are] better for others." This past March, IonQ became the first publicly traded pure-play quantum computing company via a SPAC merger. According to Merzbacher, the startup appears to have its eye on marketing rack-mounted quantum hardware to the data center market, although it hasn't voiced such intentions publicly.


Lucas Cavalcanti on Using Clojure, Microservices, Hexagonal Architecture ...

One thing to mention about the Cockburn hexagonal architecture is that it was born in a Java, object-oriented world. And just to give some context: what we use is not exactly that implementation, but it uses that idea as an inspiration. In Cockburn's idea, you have a web server, and every operation on that web server is a port, and you have the adapter; the port is an interface, and the adapter is the actual implementation of that interface, with the rest being the classes that implement it. In our implementation, we use that idea of separating the port, which is the communication with the external world, from the adapter, which is the code that translates that communication into actual code that you can execute. And then the controller is the piece that gets that communication from the external world and runs the actual business logic. I think the Cockburn definition stops at the controller, and after the controller it's already business logic, since we are working in Clojure and functional programming.
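As a rough illustration of the port/adapter/controller split described here (in Java rather than Clojure, and with hypothetical names), a port is just an interface, an adapter is its technology-specific implementation, and the controller wires external input to the business logic:

```java
// Port: how the business logic talks to the outside world, expressed as an interface.
interface CustomerRepository {
    Customer findById(String id);
}

// Adapter: translates the port into a concrete technology (here, a stubbed external call).
class HttpCustomerRepository implements CustomerRepository {
    @Override
    public Customer findById(String id) {
        // In a real adapter this would call an external service and map the response.
        return new Customer(id, "example");
    }
}

// Controller: receives input from the outside world and runs the business logic through the port.
class CustomerController {
    private final CustomerRepository repository;

    CustomerController(CustomerRepository repository) {
        this.repository = repository;
    }

    Customer handle(String id) {
        return repository.findById(id);
    }
}

record Customer(String id, String name) {}
```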


Excel 4, Yes Excel 4, Can Haunt Your Cloud Security

Scary? Sure, but still, how hard can it be to spot a macro attack? It’s harder than you might think. Vigna explained XLM makes it easy to create dangerous but obfuscated code. It started with trivial obfuscation methods. For example, the code was scattered hither and yon and written in a white font on a white background. Kid’s stuff. But later versions started using more sophisticated methods, such as hiding sheets with the VeryHidden flag instead of Hidden. Users can’t unhide a VeryHidden flag from Excel. You must uncover VeryHidden data with a VBA script or even resort to a hex editor. How many Excel users will even know what a hex editor is, never mind use it? Adding insult to injury, Excel 4 doesn’t differentiate between code and data. So, yes, what looks like data may be executed as code. It gets worse. Vigna added “Attackers may build the true payload one character at a time. They may add a time dependence, making the current day a decryption key for the code. On a wrong day, you’ll just see gibberish.” As VMware security researcher Stefano Ortolani added, Excel 4.0 macros are “easy to use but also easy to complicate.”
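Outside of Excel itself, hidden and VeryHidden sheets can also be flagged programmatically. A minimal sketch, assuming the Apache POI library (which exposes sheet visibility, including VERY_HIDDEN, on its Workbook API):

```java
import java.io.File;
import org.apache.poi.ss.usermodel.SheetVisibility;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;

public class VeryHiddenSheetScanner {
    public static void main(String[] args) throws Exception {
        // Open the workbook passed on the command line and report sheets a user cannot unhide from Excel.
        try (Workbook wb = WorkbookFactory.create(new File(args[0]))) {
            for (int i = 0; i < wb.getNumberOfSheets(); i++) {
                if (wb.getSheetVisibility(i) == SheetVisibility.VERY_HIDDEN) {
                    System.out.println("VeryHidden sheet: " + wb.getSheetName(i));
                }
            }
        }
    }
}
```

Whether a VeryHidden sheet actually contains an XLM macro still requires deeper inspection, but its mere presence is a useful triage signal.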


Agile Data Labeling: What it is and why you need it

The concept of Autolabeling, which consists of using an ML model to generate “synthetic” labels, has become increasingly popular in the most recent years, offering hope to those tired of the status quo, but is only one attempt at streamlining data labeling. The truth, though, is that no single approach will solve all issues: at the center of autolabeling, for instance, is a chicken-and-egg problem. That is why the concept of Human-in-the-Loop labeling is gaining traction. That said, those attempts feel uncoordinated and bring little to no relief to companies who often struggle to see how those new paradigms apply to their own challenges. That’s why the industry is in need of more visibility and transparency regarding existing tools (a wonderful initial attempt at this is the TWIML Solutions Guide, though it’s not specifically targeted towards labeling solutions), easy integration between those tools, as well as an end-to-end labeling workflow that naturally integrates with the rest of the ML lifecycle. Outsourcing the process might not be an option for specialty use cases for which no third party is capable of delivering satisfactory results.


Brain-computer interfaces are making big progress this year

The ability to translate brain activity into actions was achieved decades ago. The main challenge for private companies today is building commercial products for the masses that can find common signals across different brains that translate to similar actions, such as a brain wave pattern that means “move my right arm.” This doesn’t mean the engine should be able to do so without any fine tuning. In Neuralink’s MindPong demo above, the rhesus monkey went through a few minutes of calibration before the model was fine-tuned to his brain’s neural activity patterns. We can expect this routine to happen with other tasks as well, though at some point the engine might be powerful enough to predict the right command without any fine-tuning, which is then called zero-shot learning. Fortunately, AI research in pattern detection has made huge strides, specifically in the domains of vision, audio, and text, generating more robust techniques and architectures to enable AI applications to generalize. The groundbreaking paper Attention is all you need inspired many other exciting papers with its suggested ‘Transformer’ architecture. 


Here’s how hackers are cracking two-factor authentication security

Our experiments revealed a malicious actor can remotely access a user’s SMS-based 2FA with little effort, through the use of a popular app (name and type withheld for security reasons) designed to synchronize user’s notifications across different devices. Specifically, attackers can leverage a compromised email/password combination connected to a Google account (such as username@gmail.com) to nefariously install a readily available message mirroring app on a victim’s smartphone via Google Play. This is a realistic scenario since it’s common for users to use the same credentials across a variety of services. Using a password manager is an effective way to make your first line of authentication — your username/password login — more secure. Once the app is installed, the attacker can apply simple social engineering techniques to convince the user to enable the permissions required for the app to function properly. For example, they may pretend to be calling from a legitimate service provider to persuade the user to enable the permissions. After this, they can remotely receive all communications sent to the victim’s phone, including one-time codes used for 2FA.


Agile drives business growth, but culture is stifling progress

Senior leaders who invest in upskilling will ensure a culture of innovation in the enterprise. Skills needed today and in the future are identified and learning curves accelerated by providing immersive experiences to supplement learning. At Infosys, we categorize employees into different skill horizons based on workers’ core, digital, and emerging skills. For staying close to the customer through better insights, data is not just a lazy asset locked in systems of record — it is accessible through an end-to-end system that translates customer insights into action. Going further, artificial intelligence taps into unspoken team behaviors and interactions, which research from CB Insights found increases revenue by as much as 63%. Teams will also need to collaborate effectively and make decisions on their own. This will only happen if leaders understand when to guide and when to trust. In our research, we found that the most effective Agile firms (we call these “Sprinters”) are much more likely to foster servant leadership, along with the seven levers described.


Attackers Change Their Code Obfuscation Methods More Frequently

In an analysis posted last week, researchers at the Microsoft 365 Defender Threat Intelligence Team tracked one cybercriminal group's phishing campaign as the techniques changed at least 10 times over the span of a year. The campaign, dubbed XLS.HTML by the researchers, used plaintext, escape encoding, base64 encoding, and even Morse code, the researchers said. Changing up the encoding of attachments and data is not new, but highlights that attackers understand the need to add variation to avoid detection, the Microsoft researchers said. Microsoft's research is not the first to identify the extensive use of obfuscation. Such techniques are as old as malware itself, but more recently, attackers are switching up their obfuscation techniques more frequently. In addition, increasingly user-friendly tools used by cybercriminals intent on phishing make using sophisticated obfuscation much easier. Messaging security provider Proofpoint documented seven obfuscation techniques in a paper published five years ago, and even then, many of the obfuscation techniques were not new, the company said.
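As a toy illustration of why layered encodings complicate detection (this is generic Java, not the actual XLS.HTML campaign), a payload wrapped in two rounds of Base64 looks like noise until each layer is peeled back:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class LayeredDecode {
    public static void main(String[] args) {
        // A string wrapped in two layers of Base64, the kind of nesting attackers use to slip past filters.
        String original = "https://example.invalid/payload";
        String once = Base64.getEncoder().encodeToString(original.getBytes(StandardCharsets.UTF_8));
        String twice = Base64.getEncoder().encodeToString(once.getBytes(StandardCharsets.UTF_8));

        // An analyst (or detection rule) peels the layers back one at a time.
        String decoded = twice;
        for (int i = 0; i < 2; i++) {
            decoded = new String(Base64.getDecoder().decode(decoded), StandardCharsets.UTF_8);
        }
        System.out.println(decoded); // prints the original string
    }
}
```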


Navigating an asymmetrical recovery

The key for many businesses will be to build scenarios that account for a wider diffusion of results than was needed in the past. Take the cinema business as an example. Instead of sales projections being drawn up in a band between down-10% and up-10%, we’ve seen that some businesses can find themselves in a band between down-70% and up-80%. An unexpected upside sounds like a nice problem to have, but it also can create real operating challenges. Few of the companies whose growth was supercharged during the pandemic had a plan for that level of growth, which led to shortages, stock-outs, and delays that undermined performance. Planning for extremes is almost certain to be critical for some time to come. Although there is considerable liquidity overall in the debt markets, whether from traditional loans, bonds, or newer debt funds, companies’ ability to access these markets will vary widely. Regional and country differences in government support, along with variations in capital availability between companies of different sectors and size, are all creating additional asymmetries and unpredictable balance sheet pressures. 


Driving DevOps With Value Stream Management

A value stream, such as a DevOps pipeline, is simply the end-to-end set of activities that delivers value to our customers, whether internal or external to the organization. In an ideal state, work and information flow efficiently, with minimal delays or queuing of work items. So far, this all sounds great. But good things seldom come easily. Let's start with the fact that there are hundreds of tools available to support a Dev(Sec)Ops toolchain. Moreover, it takes specific skills, effort, costs, and time to integrate and configure the tools selected by your organization. While software developers perform the integration effort, the required skills may differ from those available in your software development teams. Also, such work takes your developers away from their primary job of delivering value via software products for your internal and external customers. In short, asking your development teams to build their Dev(Sec)Ops toolchain configurations is a bit like asking manufacturing operators to build their manufacturing facilities.



Quote for the day:

"Great leaders are almost always great simplifiers who can cut through argument, debate and doubt to offer a solution everybody can understand." -- General Colin Powell

Daily Tech Digest - August 16, 2021

Pepperdata CEO says AI ambitions outpace data management reality

When we had just classic databases, data warehouses, and stuff like data was managed sort of centrally, people had a very well-defined view of what was going on. It was very narrow in scope. That definition has been blown to smithereens. It’s like everything is enterprise data. It’s just ballooned. ... People are realizing that data for customer success is really important. That part is becoming more obvious to more people. If somebody comes to my website, and I take three days to respond to them, they’re going to be gone. But if I can respond to them in 30 seconds and say something intelligent, all of a sudden that interaction becomes much more valuable. My sales cycles become much shorter. The rest of it, concerning how to use the data to more efficiently run my business, however, is completely unclear at this point. ... Every time we do a new technology and all of a sudden people invest a ton in it, then you find your finance people are writing it off. This is no different. The data wave has been hyped so much that people are putting more and more money into it. They got to be like Google. They have to be like Facebook. 


Banks are moving their core operations into the cloud at a rapid rate. But new tech brings new challenges

As with any concentrated market, there is a risk that cloud providers might start dictating their own terms, at the expense of the stability of the financial system. For example, they could refuse to be transparent by failing to open up their technologies to third-party scrutiny, meaning that it would be impossible to know if providers have baked in sufficient resiliency to carry out banking operations. Modernizing is key, therefore, but it needs to be done cautiously, and with a reliable strategy. For James, the best way forward is to deploy multi-cloud configurations in the financial sector to balance the risk across multiple providers. Only 17% of the financial institutions surveyed by Google have already adopted multi-cloud as an architecture of choice, while 28% rely on single cloud. According to the company, more work needs to be done from a regulatory aspect to incentivize a robust and responsible adoption of cloud among financial organizations. "Consumers' demand for very quick transformation is becoming really overwhelming, and financial services organizations will take shortcuts to deliver on customer expectations as soon as possible," said James.


What is federated learning?

Federated learning starts with a base machine learning model in the cloud server. This model is either trained on public data (e.g., Wikipedia articles or the ImageNet dataset) or has not been trained at all. In the next stage, several user devices volunteer to train the model. These devices hold user data that is relevant to the model’s application, such as chat logs and keystrokes. These devices download the base model at a suitable time, for instance when they are on a wi-fi network and are connected to a power outlet (training is a compute-intensive operation and will drain the device’s battery if done at an improper time). Then they train the model on the device’s local data. After training, they return the trained model to the server. A key property of popular machine learning algorithms such as deep neural networks and support vector machines is that they are parametric. Once trained, they encode the statistical patterns of their data in numerical parameters and they no longer need the training data for inference. Therefore, when the device sends the trained model back to the server, it doesn’t contain raw user data.
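The aggregation step on the server is conceptually simple. A minimal sketch of unweighted federated averaging (real FedAvg weights each client by its number of training samples, and production systems add secure aggregation on top):

```java
import java.util.List;

public class FederatedAveraging {

    // Each device returns only its locally trained parameter vector, never raw user data.
    static double[] average(List<double[]> clientParameters) {
        int dims = clientParameters.get(0).length;
        double[] global = new double[dims];
        for (double[] params : clientParameters) {
            for (int i = 0; i < dims; i++) {
                global[i] += params[i] / clientParameters.size();
            }
        }
        return global; // becomes the next base model pushed back to the devices
    }
}
```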


How One Rogue User Took Down Our API

A team of developers won’t be able to suss out all the various bugs in your services, but thousands of users will. And it only takes one to exploit a weakness. While our zealous user was the flapping butterfly wing that led to the tornado, it was aided and abetted by our own bad assumptions. Fortunately, there are strategies and tools you can use to mitigate these situations. If you’re lucky, you have a Quality Assurance team dedicated to catching bugs. Have you heard the one about a QA tester walking into a bar? Even if you do have a QA team — and especially if you don’t — automated load, end-to-end, and fuzz testing will also help catch those tricky bugs. I would recommend reading Martin Fowler’s article on The Practical Test Pyramid. In the end, APIs are like chainsaws. They are powerful tools intended to empower our users. But that power needs to come with the necessary safety measures. Without them, your users may end up causing a lot of undue damage to both themselves and you.


Reliance on third party workers making companies more vulnerable to cyberattacks

Too many organizations lack automated and effective methods to centrally track and manage their relationships with the burgeoning number of third parties with whom they do business. This, coupled with the lack of information organizations have about these third parties, makes them a cybercriminal’s best friend. The recent Presidential Executive Order (EO) mandates the federal government “improve its efforts to identify, deter, protect against, detect, and respond to these actions and actors.” For organizations looking to make changes to their third party identity risk security measures, there are steps they can implement today including: properly identifying who each third party is and the sensitive data to which they have access; conducting regular user audits to ensure third parties have access based on the least amount of privilege necessary to do their jobs; extending zero trust programs to third party non-employees; and conducting continuous risk ratings of the individuals working within a third party vendor or partner, not just the organization as a whole.


The Intersection of Ecommerce and NFTs: How NFT Technology is Changing DeFi

DeFi (decentralized finance) technology allows for the inherent convenience of centralized markets without allowing the wealth and governance authority to pool into one person’s wallet. Essentially, DeFi is enabled by the blockchain, which enables permission-less, peer-to-peer transactions. This removes middlemen like banks and other large financial institutions. It lowers costs and technical barriers for entrepreneurs and individuals. Fees, documentation, and legal jurisdictions prevent many people across the world from accessing the financial tools they need to succeed. DeFi platforms circumvent the need for all of these things and allow them to transact in a secure environment. NFTs are the driving force behind a significant portion of the DeFi infrastructure. NFTs aren’t limited to collectibles. They represent programmable bits of data stored on the blockchain. The blockchain provides a transparent, hack-proof storage solution. This equates to ownership over pieces of data that can be programmed to do different things when interacted with.


Top seven transformation trends dominating the digital ecosystem

Digital transformation essentially boils down to unlocking value for customers. McKinsey estimates that digital transformation initiatives that focus on customer-centricity increase customer satisfaction by 20-30% and economic gains by 20-50%. Organisations investing in digital transformation are looking to deliver innovative and seamless customer experiences in real-time. There is a greater focus on customer lifetime value (CLV) and the role of innovative customer experiences on long-term customer value. In a continuously evolving digital ecosystem, with no dearth of choice and convenience, customer behaviours are rapidly changing. In such a world, businesses need a holistic view of the entire customer lifecycle to go beyond transactional interactions and establish trust. Organisations are connecting each step in the customer journey to interact and understand prominent needs and gain an exceptional number of improvement opportunities. This is possible by implementing an automated data collection process and creating a universally available data repository for accurate, traceable, and updated information. 


Are enterprises loving managed services?

The reason enterprises want to reduce their network-management burden is difficulty in acquiring and maintaining skilled network-operations specialists. This has been a problem for decades; network-operations specialists have no career paths in most enterprises, so they top out in salary and promotion opportunity. Over half of the 59 enterprises I talked with said that they had a problem retaining a network specialist for more than three years, and 12 said they had problems retaining them for two years. Every enterprise said that it took longer to find qualified network specialists than programmers. ... A close second in terms of managed-service drivers was difficulty in supporting remote sites. The problem with remote network support, said 50 of my managed-service enterprises, is that the best way for diagnosis of network problems at remote sites requires that the network be used to project central technology skills to those locations. Obviously, that's Catch-22 in action. This is one reason why SD-WAN is so often associated with managed services; SD-WAN is all about adding small, remote, sites to the company VPN. 


Windows 365 response shows enterprises are hungry for cloud OS option

While a cloud OS may be attractive to some organizations and users, there will be others that require additional app support that will still require access to a machine with an onboard OS and apps (or at least browser-based access to a different cloud). Many legacy enterprise apps may not run in such an environment and are very unlikely to be migrated. Those users may not be good candidates for a Windows 365 deployment. As a result, I don’t see a cloud OS like Windows 365 becoming the universal (or even dominant) OS anytime soon. The bottom line is, enterprises that are struggling with managing multiple device types (e.g., PC and Macs, Android and iOS, Chromebooks) that need a single access point (and a single license) to apps might find Windows 365 an attractive option over buying multiple licenses and/or managing multiple user device types at substantial costs. Managing a cloud-based OS is far easier than managing installed OS and app combinations. But for most companies, the current limitations of Windows 365, and a need to run many internal mature and legacy apps, will make Windows 365 a future rather than a current option.


How can we trust a digital identity? A security CEO explains…

It's hard to prove digital identity, and many of the current approaches – such as email + password + SMS PIN codes – add complexity for the user without actually addressing the core issue, which is: can these identities be trusted? As I mentioned before, you can have email addresses that represent your identity online. And you could have multiple [email] addresses – for example, it's easy to get Gmail addresses; they're free, and so bad actors can exploit that freedom. You can have multiple accounts created by using those multiple free email addresses, and bad actors can hide behind them either to commit fraud or just to spread fake news. The trick is very much to have some sort of balance between the freedom and the friction; the freedom and that proven identity. ... But if you can make it something that has an associated security factor – and the phone does this, because it's got the SIM card – then you can have that thing which you're willing to share, but at the same time has a proven credential, and that allows you to build trust associated with that.



Quote for the day:

"Leaders make decisions that create the future they desire." -- Mike Murdock

Daily Tech Digest - August 15, 2021

Scientists removed major obstacles in making quantum computers a reality

Spin-based silicon quantum electronic circuits offer a scalable platform for quantum computation. They combine the manufacturability of semiconductor devices with the long coherence times afforded by spins in silicon. Advancing from current few-qubit devices to silicon quantum processors with upward of a million qubits, as required for fault-tolerant operation, presents several unique challenges. One of the most demanding is the ability to deliver microwave signals for large-scale qubit control. ... Completely reimagining the silicon chip structure is the solution to the problem. Scientists started by removing the wire next to the qubits. They then applied a novel way to deliver microwave-frequency magnetic control fields across the entire system. This approach could provide control fields to up to four million qubits. Scientists then added their newly developed component, a crystal prism called a dielectric resonator. When microwaves are directed into the resonator, it focuses the wavelength of the microwaves down to a much smaller size.


Agile strategy: 3 hard truths

One of the primary challenges is that leadership can often be a barrier when an organization is seeking to become more agile. According to last year’s Business Agility Report from Scrum Alliance and the Business Agility Institute, this is the most prevalent challenge that agile coaches report. Some reasons for this include a lack of buy-in and support, resistance to change, having a mindset that’s not conducive to agility, a lack of alignment between agile teams and leadership, lack of understanding, and a deeply rooted organizational legacy regarding management styles. Overcoming legacy structures, cultures, and mindsets can be difficult. Some coaches have reported that leaders view agile as being “for their staff” and not for them. Additionally, leaders may have competing priorities – such as retaining control – which can hinder organization-wide adoption of agile methodologies. Any leader considering an agile transformation must understand that in order to succeed, full executive buy-in is needed and that they too will need to change their way of working and thinking.


Custom Rate Limiting for Microservices

API providers use rate limit design patterns to enforce API usage limits on their clients. It allows API providers to offer reliable service to the clients. This also allows a client to control its API consumption. Rate limiting, being a cross-cutting concern, is often implemented at the API Gateway fronting the microservices. There are a number of API Gateway solutions that offer rate-limiting features. In many cases, the custom requirements expected of the API Gateway necessitate developers to build their own API Gateway. The Spring Cloud Gateway project provides a library for developers to build an API Gateway to meet any specific needs. In this article, we will demonstrate how to build an API Gateway using the Spring Cloud Gateway library and develop custom rate limiting solutions. A SaaS provider offers APIs to verify the credentials of a person through different factors. Any organization that utilizes the services may invoke APIs to verify credentials obtained from national ID cards, face images, thumbprints, etc. The service provider may have a number of enterprise customers that have been offered a rate limit - requests per minute, and a quota - requests per day, depending on their contracts.
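A minimal sketch of the per-customer limiting piece in Spring Cloud Gateway: a KeyResolver bean maps each request to the customer it belongs to (here via a hypothetical X-Client-Id header), and the built-in Redis-backed rate limiter enforces the per-minute style limit. The per-day quota from the contract would require a custom RateLimiter implementation, which the library also allows.

```java
import org.springframework.cloud.gateway.filter.ratelimit.KeyResolver;
import org.springframework.cloud.gateway.filter.ratelimit.RedisRateLimiter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import reactor.core.publisher.Mono;

@Configuration
public class RateLimitConfig {

    // Resolve each request to the enterprise customer it belongs to.
    @Bean
    public KeyResolver clientKeyResolver() {
        return exchange -> Mono.justOrEmpty(
                exchange.getRequest().getHeaders().getFirst("X-Client-Id"));
    }

    // Token-bucket limiter: roughly 'replenishRate' requests per second, with bursts up to 'burstCapacity'.
    @Bean
    public RedisRateLimiter redisRateLimiter() {
        return new RedisRateLimiter(10, 20);
    }
}
```

The route definition then references both beans through the RequestRateLimiter gateway filter.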


Google Introduces Two New Datasets For Improved Conversational NLP

Conversational agents are dialogue systems that use NLP to respond to a given query in human language. They leverage advanced deep learning and natural language understanding to reach a point where conversational agents can transcend simple chatbot responses and make them more contextual. Conversational AI encompasses three main areas of artificial intelligence research — automatic speech recognition (ASR), natural language processing (NLP), and text-to-speech (TTS or speech synthesis). These dialogue systems are utilised to read from the input channel and then reply with the relevant response in graphics, speech, or haptic-assisted physical gestures via the output channel. Modern conversational models often struggle when confronted with temporal relationships or disfluencies. The capability of temporal reasoning in dialogs in massive pre-trained language models like T5 and GPT-3 is still largely under-explored. The progress on improving their performance has been slow, in part, because of the lack of datasets that involve these conversational and speech phenomena.


Addressing the cybersecurity skills gap through neurodiversity

Having a career in cybersecurity typically requires logic, discipline, curiosity and the ability to solve problems and find patterns. This is an industry that offers a wide spectrum of positions and career paths for people who are neurodivergent, particularly for roles in threat analysis, threat intelligence and threat hunting. Neurodiverse minds are usually great at finding the needle in the haystack, the small red flags and minute details that are critical for hunting down and analyzing potential threats. Other strengths include pattern recognition, thinking outside the box, attention to detail, a keen sense of focus, methodical thinking and integrity. The more diverse your teams are, the more productive, creative and successful they will be. And not only can neurodiverse talent help strengthen cybersecurity, employing different minds and perspectives can also solve communication problems and create a positive impact for both your team and your company. According to the Bureau of Labor Statistics, the demand for Information Security Analysts — one of the common career paths for cybersecurity professionals — is expected to grow 31% by 2029, much higher than the average growth rate of 4% for other occupations.


Realizing IoT’s potential with AI and machine learning

Propagating algorithms across an IIoT/IoT network to the device level is essential for an entire network to achieve and stay in real-time synchronization. However, updating IIoT/IoT devices with algorithms is problematic, especially for legacy devices and the networks supporting them. It’s essential to overcome this challenge in any IIoT/IoT network because algorithms are core to AI edge succeeding as a strategy. Across manufacturing floors globally today, there are millions of programmable logic controllers (PLCs) in use, supporting control algorithms and ladder logic. Statistical process control (SPC) logic embedded in IIoT devices provides real-time process and product data integral to quality management succeeding. IIoT is actively being adopted for machine maintenance and monitoring, given how accurate sensors are at detecting sounds and variations in the process performance of a given machine. Ultimately, the goal is to predict machine downtimes better and prolong the life of an asset.
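The SPC logic pushed to such devices can be as small as a three-sigma control-limit check. A minimal, illustrative sketch (a real deployment would derive the limits from historical samples and run within the device's own runtime):

```java
public class SpcMonitor {

    // Flags a sensor reading that falls outside the classic three-sigma control limits.
    static boolean outOfControl(double reading, double mean, double stdDev) {
        return reading > mean + 3 * stdDev || reading < mean - 3 * stdDev;
    }
}
```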


Understanding and applying robotic process automation

RPA can allow businesses to reallocate their employees, removing them from repetitive tasks and engaging them in projects that support true growth, both for the company and individual. Work where human strengths such as emotional intelligence, reasoning and judgment are required typically brings greater value to the company, and it's also often more personally rewarding. This can raise job satisfaction and help retain employees. Further, the ability to reallocate employees can enable a business to apply their useful company knowledge to other value-adding areas, supplement talent gaps and more. Of course, there’s the attraction of being able to do one’s job more efficiently, without manual processes that can make time drag. For instance, let’s say you’re at that same investment firm and there's a rapidly growing hedge fund, requiring human resources (HR) to onboard a lot of people fast. Between provisioning accounts, providing access to the right tools, sending out emails and more, there’s a lot of work involved. With an RPA bot, 20 new people could be processed at once, with the HR person monitoring progress through a window on the corner of their screen, which also notifies them if anything needs their attention.


It's time for AI to explain itself

Ultimately, organizations may not have much choice but to adopt XAI. Regulators have taken notice. The European Union's General Data Protection Regulation (GDPR) demands that decisions based on AI be explainable. Last year, the U.S. Federal Trade Commission issued stringent guidelines around how such technology should be used. Companies found to have bias embedded in their decision-making algorithms risk violating multiple federal statutes, including the Fair Credit Reporting Act, the Equal Credit Opportunity Act, and antitrust laws. "It is critical for businesses to ensure that the AI algorithms they rely on are explainable to regulators, particularly in the antitrust and consumer protection space," says Dee Bansal, a partner at Cooley LLP, which specializes in antitrust litigation. "If a company can't explain how its algorithms work [and] the contours of the data on which they rely … it risks being unable to adequately defend against claims regulators may assert that [its] algorithms are unfair, deceptive, or harm competition." It's also just a good idea, notes James Hodson, CEO of the nonprofit organization AI for Good.


AI ethics in the real world: FTC commissioner shows a path toward economic justice

The value of a machine learning algorithm is inherently related to the quality of the data used to develop it, and faulty inputs can produce thoroughly problematic outcomes. This broad concept is captured in the familiar phrase: "Garbage in, garbage out." The data used to develop a machine-learning algorithm might be skewed because individual data points reflect problematic human biases or because the overall dataset is not adequately representative. Often skewed training data reflect historical and enduring patterns of prejudice or inequality, and when they do, these faulty inputs can create biased algorithms that exacerbate injustice, Slaughter notes. She cites some high-profile examples of faulty inputs, such as Amazon's failed attempt to develop a hiring algorithm driven by machine learning, and the International Baccalaureate's and UK's A-Level exams. In all of those cases, the algorithms introduced to automate decisions kept identifying patterns of bias in the data used to train them and attempted to reproduce them. ...


How to navigate technology costs and complexity with enterprise architecture

The modern business world is increasingly driven by technology. As we move to a more interconnected and complex environment, the demand for suitable technologies is increasing – this is so much so that an average enterprise pays for approximately 1,516 applications. With a shift to remote working, we’re also seeing an overwhelming imperative to migrate to the cloud, and today, application costs are estimated to make up 80 per cent of the entire IT budget. Industry analyst Gartner has even forecasted that worldwide IT spending will reach $4 trillion in 2021. The modern chief information officer (CIO) is responsible for understanding these technology costs and bringing them under control – and a key enabler of this is enterprise architecture (EA). By providing a strategic view of change, EA ensures alignment of the business and IT operations, facilitating agility, speed and the ability to make real-time decisions based on reliable and consistent data. So, what are the common challenges of spiralling technology costs and how can EA help to reduce this pressure for CIOs?



Quote for the day:

“Patience is the calm acceptance that things can happen in a different order than the one you have in mind.” -- David G. Allen

Daily Tech Digest - August 14, 2021

Embedded finance won’t make every firm into a fintech company

One fintech’s choices on these matters may be completely different from another if they address different segments — it all boils down to tradeoffs. For example, deciding on which data sources to use and balancing between onboarding and transactional risk look different if optimizing for freelancers rather than larger small businesses. In contrast, third-party platform providers must be generic enough to power a broad range of companies and to enable multiple use cases. While the companies partnering with these services can build and customize at the product feature level, they are heavily reliant on their platform partner for infrastructure and core financial services, thus limited to that partner’s configurations and capabilities. As such, embedded platform services work well to power straightforward commoditized tasks like credit card processing, but limit companies’ ability to differentiate on more complex offerings, like banking, which require end-to-end optimization. More generally and from a customer’s perspective, embedded fintech partnerships are most effective when providing confined financial services within specific user flows to enhance the overall user experience.


Company size is a nonissue with automated cyberattack tools

As mentioned earlier, cybercriminals will change their tactics to derive the most benefit and least risk to themselves. Dark-side developers are helping matters by creating tools that require minimal skill and effort to operate. "Ransomware as a Service (RaaS) has revolutionized the cybercrime industry by providing ready-made malware and even a commission-based structure for threat actors who successfully extort a company," explains Little. "Armed with an effective ransomware starter pack, attackers cast a much wider net and make nearly every company a target of opportunity." A common misconception related to cyberattacks is that cybercriminals operate by targeting individual companies. Little suggests cyberattacks on specific organizations are becoming rare. With the ability to automatically scan large chunks of the internet for vulnerable computing devices, cybercriminals are not initially concerned about the company. ... Little is very concerned about a new bad-guy tactic spreading quickly — automated extortion. The idea being once the ransomware attack is successful, the victim is threatened and coerced automatically.


Paying with a palm print? We’re victims of our own psychology in making privacy decisions

Unfortunately we’re victims of our own psychology in this process. We will often say we value our privacy and want to protect our data, but then, with the promise of a quick reward, we will simply click on that link, accept those cookies, login via Facebook, offer up that fingerprint and buy into that shiny new thing. Researchers have a name for this: the privacy paradox. In survey after survey, people will argue that they care deeply about privacy, data protection and digital security, but these attitudes are not supported in their behaviour. Several explanations exist for this, with some researchers arguing that people employ a privacy calculus to assess the costs and benefits of disclosing particular information. The problem, as always, is that certain types of cognitive or social bias begin to creep into this calculus. We know, for example, that people will underestimate the risks associated with things they like and overestimate the risks associated with things they dislike.


Ransomware Payments Explode Amid ‘Quadruple Extortion’

“While it’s rare for one organization to be the victim of all four techniques, this year we have increasingly seen ransomware gangs engage in additional approaches when victims don’t pay up after encryption and data theft,” Unit 42 reported. “Among the dozens of cases that Unit 42 consultants reviewed in the first half of 2021, the average ransom demand was $5.3 million. That’s up 518 percent from the 2020 average of $847,000,” researchers observed. More statistics include the highest ransom demand of a single victim spotted by Unit 42, which rose to $50 million in the first half of 2021, up from $30 million last year. So far this year, the largest payment confirmed by Unit 42 was the $11 million that JBS SA disclosed after a massive attack in June. Last year, the largest payment Unit 42 observed was $10 million. Barracuda has also tracked a spike in ransom demands: In the attacks that it’s observed, the average ransom ask per incident was more than $10 million, with only 18 percent of the incidents involving a ransom demand of less than that.


How a Simple Crystal Could Help Pave the Way to Full-scale Quantum Computing

For more than two decades global control in quantum computers remained an idea. Researchers could not devise a suitable technology that could be integrated with a quantum chip and generate microwave fields at suitably low powers. In our work we show that a component known as a dielectric resonator could finally allow this. The dielectric resonator is a small, transparent crystal which traps microwaves for a short period of time. The trapping of microwaves, a phenomenon known as resonance, allows them to interact with the spin qubits longer and greatly reduces the power of microwaves needed to generate the control field. This was vital to operating the technology inside the refrigerator. In our experiment, we used the dielectric resonator to generate a control field over an area that could contain up to four million qubits. The quantum chip used in this demonstration was a device with two qubits. We were able to show the microwaves produced by the crystal could flip the spin state of each one.


How To Transition from a Data Analyst into a Data Scientist

What do you want to be – a data analyst or a data scientist? Do you need such a transition? Why do you need this shift of being a data scientist? The most important question that might haunt most analysts would be ‘how do you want to see your career graph grow?’ This is where the big difference comes in. With a choice of path that will make you a data scientist, your career becomes more challenging with new possibilities to design learning models which will set your skills apart from the herd. Set aside time to study research papers by prominent data scientists. Most of these will be readily available on the internet free of cost. Find your areas of interest and subjects of your inclination in the field, and take notes. When you spend large sections of your time understanding data science, you must validate your learning with facts. You will find such facts when you read the works of prominent computer and data scientists like Geoffrey Hinton, Rachel Thomas, and Andrew Ng, among many established experts who contributed to data science with their studies in ML, neural networks, and tools for designing models.


Philips study finds hospitals struggling to manage thousands of IoT devices

Hospital cybersecurity has never been more crucial. An HHS report found that there have been at least 82 ransomware incidents worldwide this year, with 60% of them specifically targeting US hospital systems. Azi Cohen, CEO of CyberMDX, noted that hospitals now have to deal with patient safety, revenue loss and reputational damage when dealing with cyberattacks, which continue to increase in frequency. Almost half of hospital executives surveyed said they dealt with a forced or proactive shutdown of their devices in the last six months due to an outside attack. Mid-sized hospital systems struggled mightily with downtime from medical devices. Large hospitals faced an average shutdown time of 6.2 hours and a loss of $21,500 per hour. But the numbers were far worse for mid-sized hospitals, whose IT directors reported an average of 10 hours of downtime and losses of $45,700 per hour. "No matter the size, hospitals need to know about their security vulnerabilities," said Maarten Bodlaender, head of cybersecurity services at Philips.


Does it Matter? Smart home standard is delayed until 2022

Richardson said that one big reason for the delay is that the software development kit (SDK) needs more work. He also stressed that with most standards-setting efforts, the goal is to deliver a specification, not a functioning SDK that developers can implement to test and use to build products. This is true. There is a world of difference between functioning software and a written spec. A developer working on Matter who didn’t want to be named told me he wasn’t surprised by the delay, and thought it might actually help smaller companies, because it gives them more time to work with the specification and meet the product launches expected from Amazon, Google, and Apple with more fully developed products of their own. He also added that he thought the SDK performed well in a controlled environment, but still needed more work. I was less convinced by the CSA’s argument that adding more companies to the working group (back in May there were 180 members and now there are 209) had caused delays. By that logic, we may never see a standard. 


Methods for Saving and Integrating Legacy Data

The IT person tells management the legacy database has maybe another month before it completely crashes. This is bad news for management. The database has a huge amount of valuable data that needs to be transferred somewhere for purposes of storage, until a solution for transforming and transferring the legacy data to the new system can be found. Simply losing the data, which contains information that must be saved for legal reasons, and/or contains valuable customer information, would damage profits, and is unacceptable. Two options for saving the legacy data in an emergency are: 1) transforming the files into a generalized format (such as PDF, Excel, TXT) and storing the new, readable files in the new database, and 2) transferring the legacy data to a VM copy of the legacy database, which is supported by a cloud. Thomas Griffin, of the Forbes Technology Council, wrote “The first step I would take is to move all data to the cloud so you’re not trapped by a specific technology. Then you can take your time researching the new technology. Find out what competitors are using, and read to see what tools are trending in your industry.”
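Option 1, dumping tables into a generalized format before the legacy database fails, can be as plain as a JDBC-to-CSV export. A rough sketch with hypothetical connection details and table name (a real export would also escape delimiters and handle NULLs):

```java
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class LegacyTableExporter {
    public static void main(String[] args) throws Exception {
        // Hypothetical JDBC URL, credentials, and table; substitute the legacy system's own driver.
        try (Connection conn = DriverManager.getConnection("jdbc:legacy://host/db", "user", "pass");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM claims");
             PrintWriter out = new PrintWriter("claims.csv")) {

            ResultSetMetaData meta = rs.getMetaData();
            int cols = meta.getColumnCount();
            // Write a header row, then one CSV line per record.
            for (int i = 1; i <= cols; i++) {
                out.print(meta.getColumnLabel(i) + (i < cols ? "," : "\n"));
            }
            while (rs.next()) {
                for (int i = 1; i <= cols; i++) {
                    out.print(rs.getString(i) + (i < cols ? "," : "\n"));
                }
            }
        }
    }
}
```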


Is Your Current Cybersecurity Strategy Right for a New Hybrid Workforce?

To support a secure and productive hybrid workforce, enterprises need a technology platform that scales and adapts to their changing business requirements. This requires adopting a modular approach to supporting hybrid workers that includes zero trust network access (ZTNA) for access to private or on-premises applications, a multi-mode cloud access security broker (CASB) for all types of cloud services, and on-device web security to protect user privacy. Securing corporate data on managed and BYOD devices is critical for businesses with hybrid workforces. ZTNA overcomes the challenges associated with VPNs and provides greater protection. It uses the zero-trust principle of least privilege to give authorised users secure access to specific resources one at a time. This is accomplished through identity and access management (IAM) capabilities such as single sign-on (SSO) and multi-factor authentication (MFA), as well as contextual access control.
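To make the "one resource at a time" idea concrete, here is a minimal sketch of a least-privilege access decision that combines identity, MFA status, and device posture. The policy table, resource names, and users are hypothetical; a real ZTNA product would evaluate far richer context from the IdP and device-management stack.

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    user: str
    mfa_verified: bool          # satisfied via the IdP's MFA step
    device_managed: bool        # corporate-managed vs. BYOD
    requested_resource: str

# Hypothetical policy: each entry lists who may reach one private application.
POLICY = {
    "payroll-app": {"allowed_users": {"alice"}, "require_managed_device": True},
    "wiki":        {"allowed_users": {"alice", "bob"}, "require_managed_device": False},
}

def authorize(ctx: AccessContext) -> bool:
    """Grant access to exactly one resource at a time, never the whole network."""
    rule = POLICY.get(ctx.requested_resource)
    if rule is None or not ctx.mfa_verified:
        return False
    if rule["require_managed_device"] and not ctx.device_managed:
        return False
    return ctx.user in rule["allowed_users"]

print(authorize(AccessContext("bob", True, False, "payroll-app")))   # False: not on the allow list
print(authorize(AccessContext("alice", True, True, "payroll-app")))  # True
```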



Quote for the day:

"Leadership involves finding a parade and getting in front of it." - John Naisbitt

Daily Tech Digest - August 13, 2021

7 ways to harden your environment against compromise

Running legacy operating systems increases your vulnerability to attacks that exploit long-standing vulnerabilities. Where possible, look to decommission or upgrade legacy Windows operating systems. Legacy protocols can also increase risk: older file-share technologies are a well-known attack vector for ransomware but are still in use in many environments. In this incident, there were many systems, including Domain Controllers, that hadn’t been patched recently, which greatly aided the attacker’s movement across the environment. As part of helping customers, we look at the most important systems and make sure they are running the most up-to-date protocols they can support. As the saying goes, “collection is not detection.” On many engagements, the attacker’s actions are clear and obvious in the event logs; the common problem is that no one is looking at them day to day or understands what normal looks like. Unexplained changes to event logs, such as deletions or retention changes, should be considered suspicious and investigated.
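One hedged example of looking for such tampering: Windows records Event ID 1102 ("The audit log was cleared") in the Security log, and the built-in wevtutil tool can query for it. The sketch below simply surfaces recent occurrences for a human to review; it assumes it is run on Windows with sufficient privileges, and a real deployment would forward logs to a SIEM rather than poll like this.

```python
import subprocess

# Event ID 1102 ("The audit log was cleared") is a classic sign of log tampering.
QUERY = "*[System[(EventID=1102)]]"

def recent_log_clears(count: int = 5) -> str:
    """Return the text of the most recent log-clear events from the Security log."""
    result = subprocess.run(
        ["wevtutil", "qe", "Security", f"/q:{QUERY}",
         "/f:text", f"/c:{count}", "/rd:true"],   # newest events first
        capture_output=True, text=True,
    )
    return result.stdout

if __name__ == "__main__":
    events = recent_log_clears()
    if events.strip():
        print("Security log was cleared recently -- investigate:\n" + events)
    else:
        print("No recent log-clear events found (or the query needs admin rights).")
```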


Robocorp Makes Remote Process Automation Programmable

Robocorp Lab creates a separate Conda environment for each of your robots, keeping each robot and its dependencies isolated from the other robots and dependencies on your system. That lets you control the exact versions of the dependencies each robot needs. It offers RCC, a set of tools for creating, managing, and distributing Python-based, self-contained automation packages, along with the robot.yaml configuration file for building and sharing automations. Control Room provides a dashboard to centrally control and monitor automations across teams, target systems, or clients, and offers the ability to scale with security, governance, and control. There are two options for Control Room: a cloud version and a self-managed version for private-cloud or on-premises deployment. The platform lets users write extensions and customizations in Python, which Karjalainen says is a limitation of proprietary systems, and extend automations with third-party tools for AI, machine learning, optical character recognition, or natural language understanding.


How Your Application Architecture Has Evolved

Distributed infrastructure on the cloud is great, but there is one problem: it is far less predictable and harder to manage than a handful of servers in your own data center. Running an application robustly on distributed cloud infrastructure is no joke; a lot of things can go wrong. An instance of your application or a node in your cluster can silently fail. How do you make sure that your application keeps running despite these failures? The answer is microservices. A microservice is a very small application responsible for one specific use case, just as in service-oriented architecture, but completely independent of other services. It can be developed using any language and framework and can be deployed in any environment, whether on-premises or on the public cloud. Additionally, microservices can easily be run in parallel on different servers in different regions, providing parallelism and high availability.
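To show how small "one specific use case" can be, here is a minimal sketch of a single-purpose service using only the Python standard library. The quote-lookup endpoint and port are made up for illustration; because the service owns one job and no shared state with other services, several identical copies can run behind a load balancer in different regions.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

# Hypothetical single-purpose service: it serves quote lookups and nothing else.
QUOTES = {"1": "Stay hungry", "2": "Ship early"}

class QuoteHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        quote_id = self.path.rstrip("/").split("/")[-1]            # e.g. GET /quotes/1
        body = json.dumps({"quote": QUOTES.get(quote_id, "not found")}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Run any number of these instances in parallel for availability.
    HTTPServer(("0.0.0.0", 8080), QuoteHandler).serve_forever()
```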


Satellites Can Be a Surprisingly Great Option for IoT

IoT technologies tend to have a few qualities in common. They're designed to be low-power, so that the batteries on IoT devices aren't sapped with every transmission. They also tend to be long-ranging, to cut down on the amount of other infrastructure required to deploy a large-scale IoT project. And they're usually fairly robust against interference, because if there are dozens, hundreds, or even thousands of devices transmitting, messages can't afford to be garbled by one another. As a trade-off, they typically don't support high data rates, which is a fair concession to make for many IoT networks' smart metering needs. ... Advancements in satellites are only accelerating the possibilities opened up by putting IoT technologies into orbit. Chief among those advancements is the CubeSat revolution, which is both shrinking and standardizing satellite construction. "We designed all the satellites when we were four people, and by the time we launched, we were about 10 people," says Longmier. "And that wasn't possible five years before we started."


Tech giants unite to drive ‘transformational’ open source eBPF projects

“It will be the responsibility of the eBPF Foundation to validate and certify the different runtime implementations to ensure portability of applications. Projects will remain independently governed, but the foundation will provide access to resources to foster all projects and organize maintenance and further development of the eBPF language specification and the surrounding supporting projects.” The new foundation serves as further evidence that open source is now the accepted model for cross-company collaboration, playing a major part in bringing the tech giants of the world together. Sarah Novotny, Microsoft’s open source lead for the Azure Office of the CTO, recently said that open source collaboration projects can enable big companies to bypass much of the lawyering to join forces in weeks rather than months. “A few years ago if you wanted to get several large tech companies together to align on a software initiative, establish open standards, or agree on a policy, it would often require several months of negotiation, meetings, debate, back and forth with lawyers … and did we mention the lawyers?” she said. “Open source has completely changed this.”


The Importance of Properly Scoping Cloud Environments

A CSP should be viewed as a partner in protecting payment data, rather than assuming that all responsibility has been completely outsourced. The use of a CSP for payment-security-related services does not relieve an organization of ultimate responsibility for its own security obligations, or for ensuring that its payment data and payment environment are secure. Much of this misunderstanding comes from simply not making payment data security, and how requirements such as those in PCI DSS will be met, part of the conversation. ... Third-Party Service Provider Due Diligence: When selecting a CSP, organizations should vet candidates through careful due diligence before establishing a relationship, and reach an explicit understanding of which entity will assume management and oversight of security. This will help organizations review and select CSPs with the skills and experience appropriate for the engagement.


The Difference Between Data Scientists and ML Engineers

The majority of the work performed by Data Scientists is in the research environment. In this environment, Data Scientists perform tasks to better understand the data so they can build models that will best capture the data’s inherent patterns. Once they’ve built a model, the next step is to evaluate whether it meets the project's desired outcome. If it does not, they will iteratively repeat the process until the model meets the desired outcome before handing it over to the Machine Learning Engineers. Machine Learning Engineers are responsible for creating and maintaining the Machine Learning infrastructure that permits them to deploy the models built by Data Scientists to a production environment. Therefore, Machine Learning Engineers typically work in the development environment which is where they are concerned with reproducing the machine learning pipeline built by Data Scientists in the research environment. And, they work in the production environment which is where the model is made accessible to other software systems and/or clients.
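The "iterate until the desired outcome is met, then hand over" loop can be sketched in a few lines. The example below uses scikit-learn's built-in breast-cancer dataset and a hypothetical 95% accuracy acceptance criterion purely for illustration; a real project would use its own data, metrics, and sign-off process.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical acceptance criterion agreed with the project stakeholders.
TARGET_ACCURACY = 0.95

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Iterate over candidate settings until the model meets the desired outcome;
# only then would it be handed over to the ML engineers for production.
for c in (0.01, 0.1, 1.0, 10.0):
    model = LogisticRegression(C=c, max_iter=5000)
    model.fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    print(f"C={c}: accuracy={accuracy:.3f}")
    if accuracy >= TARGET_ACCURACY:
        print("Desired outcome met -- hand over to ML engineering.")
        break
```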


A remedial approach to destructive IoT hacks

Automating security is critical to scaling IoT technologies without having to scale the headcount needed to secure them. Manual inventory, patching, and credential management of just one device takes about four man-hours per year. If an organization has 10,000 devices, that nets out to 40,000 man-hours per year to keep those devices secure, an impossible number of working hours unless the business has a staff of roughly 20 dedicated to the cause. To continuously secure the thousands, or even tens of thousands, of devices on an organization’s networks, automation is necessary. With the mass scale of IoT devices and the opportunities to strike in every office and facility, it is crucial to automate the identification and inventory of each device, so that security teams understand how it communicates with other devices, systems, and applications, and which people have access to it. Once devices are identified, automation technology allows for policy compliance and enforcement by patching firmware and updating passwords, defending your IoT as thoroughly as your other endpoints.
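A quick back-of-the-envelope check of that staffing claim, assuming roughly a 2,000-hour working year per full-time employee (an assumption, not a figure from the article):

```python
# Rough arithmetic behind the "staff of 20" figure.
hours_per_device_per_year = 4            # manual inventory, patching, credentials
devices = 10_000
working_hours_per_fte_per_year = 2_000   # assumed ~50 weeks x 40 hours

total_hours = hours_per_device_per_year * devices          # 40,000 hours
ftes_needed = total_hours / working_hours_per_fte_per_year
print(total_hours, ftes_needed)          # 40000 20.0
```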


Malicious Docker Images Used to Mine Monero

These malicious containers are designed to be easily misidentified as official container images, even though the Docker Hub accounts responsible for them are not official accounts. "Once they are running, they may look like an innocent container. After running, the binary xmrig is executed, which hijacks resources for cryptocurrency mining," the researchers note. Morag says social engineering could be used to trick someone into using these container images. "I guess you will never log in to the webpage mybunk[.]com, but if the attacker sent you a link to this namespace, it might happen," he says. "The fact is that these container images accumulated 10,000-plus pulls, each." While it is unclear who is behind the scheme, the malicious Docker Hub account was taken down after Aqua Security notified Docker, according to the report. Morag explains that these containers are not directly controlled by a hacker; instead, a script set as the image's entrypoint/cmd executes the automated attack. In this case, the attacks were limited to hijacking computing resources to mine cryptocurrency.
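One simple precaution this suggests is inspecting an image's declared entrypoint and cmd before running it. The sketch below shells out to `docker inspect` to read those fields; the image reference "suspicious/image:latest" is a placeholder (not one of the images from the report), and the image must already be present locally for `docker inspect` to work.

```python
import json
import subprocess

def image_entrypoint_and_cmd(image: str) -> dict:
    """Read the declared Entrypoint/Cmd out of a local image's config."""
    raw = subprocess.run(
        ["docker", "inspect", image],
        capture_output=True, text=True, check=True,
    ).stdout
    config = json.loads(raw)[0]["Config"]
    return {"Entrypoint": config.get("Entrypoint"), "Cmd": config.get("Cmd")}

# Placeholder image reference for illustration only.
info = image_entrypoint_and_cmd("suspicious/image:latest")
print(info)
if any("xmrig" in str(value).lower() for value in info.values()):
    print("Entrypoint/Cmd references xmrig -- do not run this image.")
```

This only catches the crude case where the miner is named directly in the image config; attackers can hide the payload in a script, so it complements rather than replaces registry scanning.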


Leveraging the Agile Manifesto for More Sustainability

Often the first thing that comes to mind is the “sustainable pace,” as pointed out by the 8th principle of the Agile Manifesto: “Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.” Sustainability in this sense ensures people will not be burned out by an insane deadline; instead, a sustainable pace ensures a delivery speed that can be kept up indefinitely. This understanding of sustainability falls into the profit perspective of the triple bottom line. Another way sustainability is often understood in the agile community is as sustaining agility in companies: agility and/or agile development will continue to govern the work even after, for example, external consultants and trainers are gone. The focus is then on how to build a sustainable agile culture or on sustainable agile transformations. Over all these years, the Agile Manifesto has served me well in providing guidance, even in areas it wasn't originally defined for.



Quote for the day:

"Leaders dig into their business to learn painful realities rather than peaceful illusion." -- Orrin Woodward