Daily Tech Digest - July 31, 2023

The open source licensing war is over

Too many open source warriors think that the license is the end, rather than just a means to grant largely unfettered access to the code. They continue to fret about licensing when developers mostly care about use, just as they always have. Keep in mind that more than anything else, open source expands access to quality software without involving the purchasing or (usually) legal teams. This is very similar to what cloud did for hardware. The point was never the license. It was always about access. Back when I worked at AWS, we surveyed developers to ask what they most valued in open source leadership. You might think that contributing code to well-known open source projects would rank first, but it didn’t. Not even second or third. Instead, the No. 1 criterion developers used to judge a cloud provider’s open source leadership was that it “makes it easy to deploy my preferred open source software in the cloud.” ... One of the things we did well at AWS was to work with product teams to help them discover their self-interest in contributing to the projects upon which they were building cloud services, such as ElastiCache.

Navigate Serverless Databases: A Guide to the Right Solution

One of the core features of serverless is pay-as-you-go pricing. Almost all serverless databases attempt to address a common challenge: how to provision resources economically and efficiently under uncertain workloads. Prioritizing lower costs may mean consuming fewer resources; however, in the event of unexpected spikes in business demand, you may have to compromise user experience and system stability. On the other hand, provisioning more generously for safety leads to resource waste and higher costs. Striking a balance between these two approaches requires complex and meticulous engineering management, which would divert your focus from the core business. Furthermore, the pay-as-you-go billing model has varying implementations in different serverless products. Most serverless products offer granular billing based on storage capacity and read/write operations per unit. This is largely possible due to the distributed architecture that allows finer resource scaling.
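The granular, usage-based billing described above can be sketched in a few lines. The unit definitions and prices below are entirely hypothetical; real serverless databases publish their own rates, but the shape of the calculation is the same:

```python
# Hypothetical pay-as-you-go bill: storage plus read/write request units.
# All prices here are made up for illustration only.

PRICE_PER_GB_MONTH = 0.25       # storage
PRICE_PER_MILLION_READS = 0.20
PRICE_PER_MILLION_WRITES = 1.00

def monthly_bill(storage_gb: float, reads: int, writes: int) -> float:
    """Granular billing: you pay only for what you actually consumed."""
    return (storage_gb * PRICE_PER_GB_MONTH
            + reads / 1_000_000 * PRICE_PER_MILLION_READS
            + writes / 1_000_000 * PRICE_PER_MILLION_WRITES)

# A quiet month vs. an unexpected spike: cost scales with usage,
# with no need to pre-provision for the peak.
quiet = monthly_bill(storage_gb=50, reads=2_000_000, writes=500_000)
spike = monthly_bill(storage_gb=50, reads=40_000_000, writes=10_000_000)
print(f"quiet month: ${quiet:.2f}, spike month: ${spike:.2f}")
```

The quiet month is dominated by storage cost, while the spike month pays for the burst and nothing more, which is exactly the trade-off against fixed provisioned capacity.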

Building a Beautiful Data Lakehouse

It’s common to compensate for the respective shortcomings of existing repositories by running multiple systems, for example, a data lake, several data warehouses, and other purpose-built systems. However, this approach frequently creates a few headaches. Most notably, data stored in one repository type is often excluded from analytics run on another, which degrades the results. In addition, having multiple systems requires the creation of expensive and operationally burdensome processes to move data from lake to warehouse. To overcome the data lake’s quality issues, for example, many often use extract/transform/load (ETL) processes to copy a small subset of data from lake to warehouse for important decision support and BI applications. This dual-system architecture requires continuous engineering to ETL data between the two platforms, and each ETL step risks introducing failures or bugs that reduce data quality. Moreover, leading ML systems, such as TensorFlow, PyTorch, and XGBoost, don’t work well on data warehouses.
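As a rough illustration of the dual-system pattern, here is a toy lake-to-warehouse ETL step in Python. The schema, region filter, and "dirty record" are invented for the example; the point is that each transform is a place where silent data-quality loss can creep in:

```python
# Minimal sketch of the dual-system pattern: a recurring ETL job copies
# a small subset of lake records into a warehouse table for BI.
# Names and schema are illustrative, not from any real system.

lake = [
    {"order_id": 1, "region": "EU", "amount": "19.99"},
    {"order_id": 2, "region": "US", "amount": "5.00"},
    {"order_id": 3, "region": "EU", "amount": None},  # dirty record
]

def extract(records, region):
    """Select the subset of lake data destined for the warehouse."""
    return [r for r in records if r["region"] == region]

def transform(records):
    """Cast raw strings to numbers for the warehouse schema.
    Note the silent behavior: rows with missing amounts are dropped,
    changing BI results without any visible failure."""
    return [{**r, "amount": float(r["amount"])}
            for r in records if r["amount"] is not None]

warehouse = transform(extract(lake, "EU"))
print(warehouse)  # only the clean EU subset reaches the warehouse
```

Every such hand-written step has to be maintained as schemas evolve, which is the "continuous engineering" burden the excerpt describes.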

How the best CISOs leverage people and technology to become superstars

Exemplary CISOs are also able to address other key pain points that traditionally flummox good cybersecurity programs, such as the relationships between developers and application security (AppSec) teams, or how cybersecurity is viewed by other C-suite executives and the board of directors. For AppSec relations, good CISOs realize that developer enablement helps to shift security farther to the so-called left and closer to a piece of software’s origins. Fixing flaws before applications are dropped into production environments is important, and much better than the old way of building code first and running it past the AppSec team at the last minute to avoid those annoying hotfixes and delays to delivery. But it can’t solve all of AppSec’s problems alone. Some vulnerabilities may not show up until applications get into production, so relying on shifting left in isolation to catch all vulnerabilities is impractical and costly. There also needs to be continuous testing and monitoring in the production environment, and yes, sometimes apps will need to be sent back to developers even after they have been deployed. 

TSA Updates Pipeline Cybersecurity Directive to Include Regular Testing

Developed with input from industry stakeholders and federal partners, including the Cybersecurity and Infrastructure Security Agency (CISA) and the Department of Transportation, the revised directive will “continue the effort to reinforce cybersecurity preparedness and resilience for the nation’s critical pipelines”, the TSA said. The reissued security directive for critical pipeline companies follows the initial directive announced in July 2021 and renewed in July 2022, and the TSA said that the requirements issued in the previous years remain in place. According to the 2022 security directive update, pipeline owners and operators are required to establish and execute a TSA-approved cybersecurity implementation plan with specific cybersecurity measures, and to develop and maintain a cybersecurity incident response plan (CIRP) that includes measures to be taken during cybersecurity incidents.

What is the cost of a data breach?

"One particular cost that continues to have a major impact on victim organizations is theft/loss of intellectual property," Glenn J. Nick, associate director at Guidehouse, tells CSO. "The media tend to focus on customer data during a breach, but losing intellectual property can devastate a company's growth," he says. "Stolen patents, engineering designs, trade secrets, copyrights, investment plans, and other proprietary and confidential information can lead to loss of competitive advantage, loss of revenue, and lasting and potentially irreparable economic damage to the company." It's important to note that how a company responds to and communicates a breach can have a large bearing on the reputational impact, along with the financial fallout that follows, Mellen says. "Understanding how to maintain trust with your consumers and customers is really, really critical here," she adds. "There are ways to do this, especially around building transparency and using empathy, which can make a huge difference in how your customers perceive you after a breach. If you try to sweep it under the rug or hide it, then that will truly affect their trust in you far more than the breach alone."

Meeting Demands for Improved Software Reliability

“Developers need to fix bugs, address performance regressions, build features, and get deep insights about particular service or feature level interactions in production,” he says. That means they need access to necessary data in views, graphs, and reports that make a difference to their workflows. “However, this data must be integrated and aligned with IT operators to ensure teams are working across the same data sets,” he says. Sigelman says IT operations is a crucial part of an organization’s overall reliability and quality posture. “By working with developers to connect cloud-native systems such as Kubernetes with traditional IT applications and systems of record, the entire organization can benefit from a centralized data and workflow management pane,” he says. From this point, event and change management can be combined with observability instruments, such as service level objectives, to provide not only a single view across the entire IT estate, but to demonstrate the value of reliability to the entire organization.

How will artificial intelligence impact UK consumers’ lives?

In the next five years, I expect we may see a rise in new credit options and alternatives, such as “Predictive Credit Cards,” where AI anticipates a consumer’s spending needs based on their past behaviour and adjusts the credit limit or offers tailored rewards accordingly. Additionally, fintechs are likely to integrate Large Language Models (LLMs) and add AI to digital and machine-learning powered services. ... Through AI, consumers may also be able to access a better overview of their finances, specifically personalised financial rewards, as they would have access to tools to review all transactions, receive recommendations on personalised spend-based rewards, and even benchmark themselves against other cardholders in similar demographics or against industry standards. Consumers may also be able to ask questions and get answers at the click of a button, for example, ‘How much debt do I have compared to my available credit limits?’ or ‘What’s the best way to use my rewards points based on my recent purchases?’, improving financial literacy and potentially giving them more spending/saving power and personalised experiences in the long run.

IT Strategy as an Enterprise Enabler

IT strategy is a plan for creating an information technology capability that maximizes business value for the organization. IT capability is the organization’s ability to meet business needs and improve business processes using IT-based systems. The objective of an IT strategy is to spend the least amount of resources while generating better ROI. It helps set the direction for an organization’s IT function. A successful IT strategy helps organizations reduce operational bottlenecks, optimize TCO and derive value from technology. ... IT strategy definition and implementation covers the key aspects of technology management: planning, governance, service management, risk management, cost management, human resource management, hardware and software management, and vendor management. Broadly, IT strategy has five phases: Discovery, Assess, Current IT, Target IT and Roadmap. The idea is to keep the annual and multiyear plan in place but insert regular check-ins along the way: revisit the IT strategy every quarter or every six months to ensure that optimal business value is being created.

AI system audits might comply with local anti-bias laws, but not federal ones

"You shouldn’t be lulled into false sense of security that your AI in employment is going to be completely compliant with federal law simply by complying with local laws. We saw this first in Illinois in 2020 when they came out with the facial recognition act in employment, which basically said if you’re going to use facial recognition technology during an interview to assess if they’re smiling or blinking, then you need to get consent. They made it more difficult to do [so] for that purpose. "You can see how fragmented the laws are, where Illinois is saying we’re going to worry about this one aspect of an application for facial recognition in an interview setting. ... "You could have been doing this since the 1960s, because all these tools are doing is scaling employment decisions. Whether the AI technology is making all the employment decisions or one of many factors in an employment decision; whether it’s simply assisting you with information about a candidate or employer that otherwise you wouldn’t have been able to ascertain without advanced machine learning looking for patterns that a human couldn’t have fast enough.

Quote for the day:

"A leader should demonstrate his thoughts and opinions through his actions, not through his words." -- Jack Weatherford

Daily Tech Digest - July 30, 2023

What Is Data Strategy and Why Do You Need It?

Developing a successful Data Strategy requires careful consideration of several key steps. First, it is essential to identify the business goals and objectives that the Data Strategy will support. This will help determine what data is needed and how it should be collected, analyzed, and used. Next, it is important to assess the organization’s current data infrastructure and capabilities. This includes evaluating existing databases, data sources, tools, and processes for collecting and managing data. It also involves identifying current gaps in skills or technology that need to be addressed. Once these foundational elements are in place, organizations can begin to define their approach to Data Governance. This involves establishing policies and procedures for managing Data Quality, security, privacy, compliance, and access. It may also involve developing a framework for decision-making that ensures the right people have access to the right information at the right time. Finally, organizations should consider how they will measure success in implementing their Data Strategy. 

Battling Technical Debt

Technical debt costs you money and takes a sizable chunk of your budget. For example, a 2022 Q4 survey by Protiviti found that, on average, an organization invests more than 30% of its IT budget and more than 20% of its overall resources in managing and addressing technical debt. This money is being taken away from building new and impactful products and projects, and it means the cash might not be there for your best ideas. ... Technical debt impacts your reputation. The impact can be huge and result in unwanted media attention and customers moving to your competitors. In an article about technical debt, Denny Cherry attributes the performance woes of US airline Southwest to underinvestment in updating legacy equipment, which caused difficulties with flight scheduling as a result of "outdated processes and outdated IT." If you can't schedule a flight, you're going to move elsewhere. Furthermore, in many industries like aviation, downtime results in crippling fines. These could be enough to tip a company over the edge.

‘Audit considerations for digital assets can be extremely complex’

Common challenges when auditing crypto assets include understanding and evaluating controls over access to digital keys, reconciliations to the blockchain to verify existence of assets, considerations around service providers in terms of qualifications, availability and scope, and forms of reporting, among others. As the technology is rapidly evolving, the regulatory standards do not yet capture all crypto offerings. Everyone is operating in an uncertain regulatory environment, where the speed of change is significant for all participants. If you take accounting standards, for example, a common discussion today is how to measure these assets. Under IFRS, crypto assets are generally recognized as an intangible asset and recorded at cost. While this aligns with the technical requirements of the standards, it sometimes generates financial reporting that may not be well understood by users of the financial information who may be looking for the fair value of these assets.

Does AI have a future in cyber security? Yes, but only if it works with humans

One technique that has been around for a while is rolling AI technology into security operations, especially to manage repetitive processes. The AI filters out the noise, identifies priority alerts and surfaces them. It is also capable of capturing this data, looking for anomalies and joining the dots. Established vendors are already providing capabilities like this. Here at Nominet, we have masses of data coming into our systems every day, and being able to look at correlations to identify malicious and anomalous behaviour is very valuable. But once again we find ourselves in the definition trap. Being alerted when rules are triggered is moving towards ML, not true AI. But if we could give the system the data and ask it to find what looked truly anomalous, that would be AI. Organisations might get tens of thousands of security logs at any point in time. Firstly, how do you know if these logs show malicious activity, and if so, what is the recommended course of action?
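The "filter the noise, surface the anomalies" idea can be illustrated with a deliberately simple sketch: flag time windows whose log volume deviates sharply from the baseline. Real systems learn far richer features; the counts and the 2-sigma threshold here are arbitrary illustrations:

```python
# Toy anomaly detection over security-log volume: flag hours whose
# event count is more than two sample standard deviations from the mean.
# Counts and threshold are invented for illustration.
from statistics import mean, stdev

hourly_log_counts = [980, 1010, 995, 1005, 990, 4800, 1000, 985]

mu, sigma = mean(hourly_log_counts), stdev(hourly_log_counts)
anomalies = [(hour, n) for hour, n in enumerate(hourly_log_counts)
             if abs(n - mu) / sigma > 2]
print(anomalies)  # the hour with the 4800-event burst stands out
```

This is closer to a rule than to "true AI" in the author's sense, but it shows the shape of the problem: out of thousands of log entries, only the statistically unusual windows are escalated to a human.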

Moody’s highlights DLT cyber risks for digital bonds

The body of the paper warns of the cyber risks of smaller public blockchains, which are less decentralized and hence more vulnerable to attacks. It considers private DLTs to be more secure than similarly (small) sized public blockchains because they have greater access controls. Moody’s acknowledges that larger Layer 1 public blockchains such as Ethereum are far harder to attack, but upgrades to the network carry risks. A major challenge is the safeguarding of private keys. In reality, the most significant risks relate to the platforms themselves: bugs in smart contracts and in the oracles which introduce external data. It notes that currently many solutions don’t have cash on ledger, which reduces the attack surface and makes them less attractive targets. As cash on ledger becomes more widespread, it enables greater automation, but manipulating smart contract weaknesses could then result in unintended payouts and other exploits. Moody’s specifically mentions the risks associated with third party issuance platforms such as HSBC Orion, DBS, and Goldman Sachs’ GS DAP.

Cyber Resilience Act: EU Regulators Must Strike the Right Balance to Avoid Open Source Chilling Effect

The good news is that developers are willing to work with regulators in fine-tuning the act. And why not get them involved? They know the industry, have deep insights into prevailing processes and fully grasp the intricacies of open source. Additionally, open source is too lucrative and important to ignore. One suggestion is to clarify the wording. For example, replace “commercial activity” with “paid or monetized product.” This will go some way to narrowing the act’s scope and ensuring that open-source projects are not unnecessarily targeted. Another is differentiating between market-ready software products and stand-alone components, ensuring that requirements and obligations are appropriately tailored. Meanwhile, regulators can provide funding in the legislation to actively support open source. For example, Germany grants resources to support developers in maintaining open-source software projects of strategic importance. A similar sovereign tech fund could prove instrumental in supporting and protecting the industry across the continent.

Organizational Resilience And Operating At The Speed Of AI

The challenge becomes—particularly for mid-market organizations that may not have the resources of their larger competitors—how to corral resources to ensure they can effectively incorporate AI. If businesses are to achieve the kind of organizational resilience that is necessary to build sustainable enterprises, they must accept that AI and automation will fundamentally change company structures, culture and operations. Much of this will require investment in “intangible goods, such as business processes and new skills,” as suggested in the Brookings Institute article, but I would like to add one additional imperative: data gravity. ... To operate at the speed of AI, systems must be able to access all the information within an organization’s disparate IT infrastructure. That data must be secure, have integrity and be without bias. AI requires data agility. Therefore, organizations should employ a data gravity strategy whereby all the data within an organization is consolidated into a central hub, creating a single view of all the information. 

As Ransomware Monetization Hits Record Low, Groups Innovate

With ransomware profits in decline, groups have been exploring fresh strategies to drive them back up. While groups such as Clop have shifted tactics away from ransomware to data theft and extortion, other groups have been targeting larger victims, seeking bigger payouts. Some affiliates have been switching ransomware-as-a-service provider allegiance, with many Dharma and Phobos business partners adopting a new service named 8Base, Coveware says. Numerous criminal groups continue to wield crypto-locking malware. The largest number of successful attacks it saw during the second quarter involved either BlackCat or Black Basta ransomware, followed by Royal, LockBit 3.0, Akira, Silent Ransom and Cactus. One downside of crypto-locking malware is that attacks designed to take down the largest possible victims, in pursuit of the biggest potential ransom payment, typically demand substantial manual effort, including hands-on-keyboard time. Groups may also need to purchase stolen credentials for the target from an initial access broker, pay penetration testing experts or share proceeds with other affiliates.

How Indian organisations are keeping pace with cyber security

Jonas Walker, director of threat intelligence at Fortinet, said the digitisation of retail and the rise of e-commerce makes those sectors susceptible to payment card data breaches, supply chain attacks and attacks targeting customer information. “Educational institutions also hold a wealth of personal information, including student and faculty data, making them attractive targets for data breaches and identity theft,” he added. But enterprises in India are not about to let the bad actors get their way. Sakra World Hospital, for example, has segmented its networks and implemented role-based access, endpoint detection and response, as well as zero-trust capabilities for its internal network. It also conducts vulnerability assessments and penetration tests to secure its external assets. “Zero-trust should be implemented on your external security appliances as well,” he added. “The notification system should be strong and prompt so that action can be taken immediately to mitigate any cyber security risk.”

How Can Blockchain Lead to a Worldwide Economic Boom?

The inherent trustworthiness of distributed ledgers is a key factor here in that they greatly enhance critical economic drivers like supply chain management, land ownership, and the distribution of government and non-government services. At the same time, blockchain’s support of digital currencies provides greater access to capital, in large part by side-stepping the regulatory frameworks that govern sovereign currencies. And perhaps most importantly, blockchain helps to stymie public corruption and the diversion of funds away from their intended purpose, which allows capital and profits to reach those who have earned them and will put them to more productive uses. None of this should imply that blockchain will put the entire world on easy street. Significant challenges remain, not the least of which is the cost to establish the necessary infrastructure to support secure digital ledgers. Multiple hardened data centers are required to prevent hacking, along with high-speed networks to connect them.

Quote for the day:

"Leadership is a privilege to better the lives of others. It is not an opportunity to satisfy personal greed." -- Mwai Kibaki

Daily Tech Digest - July 29, 2023

A New Paradigm In Cybersecurity

Currently, many enterprises still focus on tower-based security (that is, securing the “home”)—securing networks, data centers, databases, endpoints, applications and middleware and then applying horizontal security solutions for rights management, policy, access management and authentication. The overarching framework creates a secure environment, but fails to prevent breaches by insiders or by external threat actors that have seized credentials or exploited the hundreds of vulnerabilities in every domain at any given time. The crown jewels—specifically data representing client information, contracts, legal documents, employee information, KYC files and other pertinent information—must remain secure and private, even if credentials are breached, or vulnerabilities lead to breaches in the networks, data centers or endpoints. Particularly in this era of increased cyber risk, along with growing reliance on data across every industry, enterprise leaders should consider this level of security and privacy, and this ease of access, as non-negotiable for keeping their valuable data and digital assets safe.

EU opens Microsoft antitrust investigation into Teams bundling

The European Commission will now carry out an in-depth investigation into whether Microsoft may have breached EU competition rules by tying or bundling Microsoft Teams to its Office 365 and Microsoft 365 productivity suites. “Remote communication and collaboration tools like Teams have become indispensable for many businesses in Europe,” explains Margrethe Vestager, executive vice-president in charge of competition policy at the European Commission. “We must therefore ensure that the markets for these products remain competitive, and companies are free to choose the products that best meet their needs. This is why we are investigating whether Microsoft’s tying of its productivity suites with Teams may be in breach of EU competition rules.” Microsoft has responded to the EU’s complaint. “We respect the European Commission’s work on this case and take our own responsibilities very seriously,” says Microsoft spokesperson Robin Koch, in a statement to The Verge.

How to define your ideal embedded build system

You might think that your software stack has nothing to do with your build system. However, the build configurations you select may dictate how your software is organized. After all, building to simulate your application code shouldn’t include low-level target drivers. In fact, you may find that even the middleware you use is entirely different! Defining your ideal build configurations may impact your software stack and vice versa. A modern embedded software stack will include several layers of independent software that are glued together through HALs and APIs. ... Today, are you using your ideal build system? Do you even know what that ideal build system looks like and the benefits it creates for your team? I highly recommend that you take a few minutes to answer these questions. If you don’t have an ideal build system you’re trying to reach, you can easily define your ideal system in 30 minutes or less. Once you have your ideal build system, look at where your build system is today. If it’s not your ideal system, don’t fret! Define some simple goals with deadlines and work on creating your ideal build system.
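One way to picture how build configurations select different slices of the stack is a small mapping from configurations to layers, so a simulation build drops the low-level target drivers while keeping the HAL boundary intact. The layer and module names below are hypothetical, standing in for whatever your real build scripts would reference:

```python
# Sketch: build configurations as selections of software-stack layers.
# A simulation build excludes target drivers; application code links
# against the same HAL either way. All names are illustrative.

STACK = {
    "application": ["app_logic", "state_machine"],
    "middleware":  ["comm_stack", "filesystem"],
    "hal":         ["hal_gpio", "hal_uart"],
    "drivers":     ["mcu_gpio_driver", "mcu_uart_driver"],
}

BUILD_CONFIGS = {
    "target":     ["application", "middleware", "hal", "drivers"],
    "simulation": ["application", "middleware", "hal"],  # drivers stubbed
    "unit_test":  ["application"],
}

def modules_for(config: str) -> list:
    """List the modules a given build configuration compiles and links."""
    return [m for layer in BUILD_CONFIGS[config] for m in STACK[layer]]

print(modules_for("simulation"))
```

The same selection idea is what a real build system expresses through targets, variants, or configuration files; the value is in deciding the slices deliberately rather than letting them accrete.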

The Future of the Enterprise Cloud Is Multi-Architecture Infrastructure

Data centers and the cloud grew up on a single, monolithic approach to computing — one size fits all. That worked in the days when workloads were relatively few and very straightforward. But as cloud adoption exploded, so too have the number and types of workloads that users require. A one-size-fits-all environment just isn’t flexible enough for users to be able to run the types of workloads they want in the most effective and cost-efficient manner. Today, technologies have emerged to overthrow the old paradigm and give developers and cloud providers what they need: flexibility and choice. One of its manifestations is multi-architecture, the ability of a cloud platform or service to support more than one legacy architecture and offer developers the flexibility to choose. Flexibility to run workloads on the architecture of your choice is important for organizations for two reasons: better price performance and — a reason that is far downstream from data centers but nevertheless important — laptops and mobile devices. 

The lost art of cloud application engineering

AI-driven coders learn from existing code repositories, and they often lack a contextual understanding of the code they generate. They produce code that works but may be hard to comprehend or maintain. This hinders developers’ control over their software and often causes mistakes when fixing or changing applications. Moreover, the generated code may not meet style conventions or best practices, or include appropriate error handling. This can make debugging, maintenance, and collaboration difficult. Remember that AI-driven code generation focuses on learning from existing code patterns to generate net-new code. Generative AI coders have a “monkey see, monkey do” approach to development, in which coding approaches are learned from the vast amount of code used as training data. This approach is helpful for repetitive or standard tasks, which make up much of what developers do, but enterprises may require more creativity and innovation for complex or unique problems.

Beyond generative AI, data quality a data management trend

"We have seen a belated -- but much required -- resurgence in the interest in data quality and supporting capabilities," Ghai said. "Generative AI may be one of the factors. But also organizations are realizing that their data is the only sustainable moat, and without data quality, this moat is not that strong." Tied in with data quality and observability, which fall under the purview of data governance, Ghai added that data privacy continues to gain importance. "There's a growing focus on building analytics with strong privacy protection via techniques like confidential computing and differential privacy," he said. ... Like Ghai, Petrie noted that many organizations are prioritizing data quality -- and overall data governance -- as their data volume grows and fuels new use cases, including generative AI. "AI creates exciting possibilities," he said. "But without accurate and governed data, it will not generate the expected business results. I believe this realization will prompt companies to renew their investments in product categories such as data observability, master data management and data governance platforms."

What Exactly Does a Head of Enterprise Design Do?

Where the head of enterprise design sits in the organizational hierarchy depends on various factors, Heiligenthal said. Company size and maturity play a role, but typically, she said, it starts in marketing and lands within product, digital and technology groups. “Organizations can be matrixed or hierarchically structured, and success depends on visibility within the organization. The further the design organization is buried, the harder it is to receive buy-in and get work done,” she explained. A struggle that many enterprises come up against, she said, is how and where the design organization sits. “Is it centralized, decentralized or embedded within products?” Each setup has its own benefits and drawbacks, but what’s worked for her is a centralized partnership; one with a distinct, centralized design team that shares a sense of purpose with various product and business teams. “It’s a hub-and-spoke model that centralizes design teams (the hub) and gives us freedom to execute across the organization (the spokes) based on north-star alignment across the business.”

How Intelligent Automation is Driving Positive Change in the Public Sector

When implemented strategically and with the needs of an organisation at the forefront, intelligent automation can support collaboration, communication and new, modern ways of working, not only contributing to increased efficiency and cost-effectiveness, but also boosting employee experience. In particular, the elimination of repetitive, manual and time-intensive administrative work – which is often cited as one of the main factors of public sector job dissatisfaction – can help with retaining key talent within the sector. Along with meeting employee expectations, strategic deployment of automation also helps public bodies to keep pace with the expectations of the general public. Automation allows citizens to access support more effectively, without the need for human intervention, especially in adult social services, which are resource-intensive centres that manage access to help and support across many key areas such as pensions, healthcare and benefits.

Ethical Debt: Why ESG Matters As Much As Technical Debt

Ethical debt is the combined consequences of a lack of optimization in your organization's environmental, social, and governance (ESG) capabilities. As pressure on organizations to meet ethical standards intensifies, managing ethical debt is becoming essential. ... Just like technical debt, the consequences of ethical debt aren't just financial. Letting this debt get out of control can damage your reputation, your relationship with regulators, talent retention, and partnerships. Another key consideration of the concept of ethical debt, however, is possible exploitation. When you're creating a product, you now need to consider what the consequences could be if it is misused by your customers. Ethical debt isn't just a theoretical concept. ... Environmental, social, and governance (ESG) ethical debt is much like technical debt. Every organization has a certain amount, and getting it down to zero is likely not worth your investment, but your ethical debt must be reduced to a reasonable level or there will be consequences for your organization.

Cloud Computing Services: Revolutionizing Data Management in Telecommunications

In addition to cost savings and scalability, cloud computing services also provide robust data security. With cyber threats becoming increasingly sophisticated, data security is a top priority for telecommunications companies. Cloud service providers employ advanced security measures, including encryption and multi-factor authentication, to protect data from unauthorized access. Furthermore, data stored in the cloud is typically backed up in multiple locations, ensuring its availability even in the event of a disaster. The integration of cloud computing services in telecommunications also facilitates improved collaboration and productivity. With data stored in the cloud, employees can access it from anywhere, at any time, using any device with an internet connection. This promotes a more flexible and efficient work environment, enabling teams to collaborate effectively regardless of their geographical location. The adoption of cloud computing services in telecommunications is also driving innovation. 

Quote for the day:

"Leaders are the ones who keep faith with the past, keep step with the present, and keep the promise to posterity." -- Harold J. Seymour

Daily Tech Digest - July 28, 2023

Cyber criminals pivot away from ransomware encryption

“Data theft extortion is not a new phenomenon, but the number of incidents this quarter suggests that financially motivated threat actors are increasingly seeing this as a viable means of receiving a final payout,” wrote report author Nicole Hoffman. “Carrying out ransomware attacks is likely becoming more challenging due to global law enforcement and industry disruption efforts, as well as the implementation of defences such as increased behavioural detection capabilities and endpoint detection and response (EDR) solutions,” she said. In the case of Clop’s attacks, Hoffman observed that it was “highly unusual” for a ransomware group to so consistently exploit zero-days given the sheer time, effort and resourcing needed to develop exploits. She suggested this meant that Clop likely has a level of sophistication and funding that is matched only by state-backed advanced persistent threat actors. Given Clop’s incorporation of zero-days in MFT products into its playbook, and its rampant success in doing so ...

Get the best value from your data by reducing risk and building trust

Data risk is the potential harm to the business arising from data mismanagement, inadequate data governance, and poor data security. Data risk that isn’t recognized and mitigated can often result in a costly security breach. To improve security posture, enterprises need an effective strategy for managing data, must ensure data protection is compliant with regulations, and should look for solutions that provide access controls, end-to-end encryption, and zero-trust access, for example. Assessing data risk is not a tick-box exercise. The attack landscape is constantly changing, and enterprises must assess their data risk regularly to evaluate their security and privacy best practices. A data subject access request is an inquiry an individual submits asking how their personal data is harvested, stored, and used; responding to these requests is a requirement of several data privacy regulations, including GDPR. It is recommended that enterprises automate these requests so that they are easier to track, data integrity is preserved, and responses are handled swiftly to avoid penalties.
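
A minimal sketch of what automating DSAR tracking could look like, assuming a 30-day response window as under GDPR; the request records, field names, and addresses below are invented for illustration:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical sketch: track data subject access requests (DSARs) so that
# none misses its regulatory response deadline.

@dataclass
class DSAR:
    subject: str
    received: date
    deadline_days: int = 30   # assumed response window (GDPR allows one month)
    completed: bool = False

    @property
    def due(self) -> date:
        return self.received + timedelta(days=self.deadline_days)

def overdue(requests: list[DSAR], today: date) -> list[DSAR]:
    """Return open requests whose response deadline has passed."""
    return [r for r in requests if not r.completed and today > r.due]

requests = [
    DSAR("alice@example.com", date(2023, 6, 1)),
    DSAR("bob@example.com", date(2023, 7, 20)),
]
late = overdue(requests, today=date(2023, 7, 28))
print([r.subject for r in late])  # only Alice's request is past its window
```

In practice the request list would be fed from an intake form or ticketing system, with the overdue check running on a schedule and alerting the privacy team.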

Why Developers Need Their Own Observability

The goal of operators’ and site reliability engineers’ observability efforts is straightforward: Aggregate logs and other telemetry, detect threats, monitor application and infrastructure performance, detect anomalies in behavior, prioritize those anomalies, identify their root causes and route discovered problems to their underlying owner. Basically, operators want to keep everything up and running — an important goal but not one that developers may share. Developers require observability as well, but for different reasons. Today’s developers are responsible for the success of the code they deploy. As a result, they need ongoing visibility into how the code they’re working on will behave in production. Unlike operations-focused observability tooling, developer-focused observability targets issues that matter to developers, like document object model (DOM) events, API behavior, detecting bad code patterns and smells, identifying problematic lines of code and test coverage. Observability, therefore, means something different to developers than operators, because developers want to look at application telemetry data in different ways to help them solve code-related problems.
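
A toy illustration of developer-facing telemetry, not tied to any particular observability product: a decorator that records latency and failure context for the functions a developer ships, separate from ops-side dashboards. The function name and payload are invented for the example.

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("dev-observability")

def observe(fn):
    """Log per-call latency and failure context for the wrapped function."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        except Exception:
            # Capture the inputs that triggered the failure, which is the
            # kind of code-level context an ops dashboard rarely surfaces.
            log.exception("%s failed with args=%r kwargs=%r", fn.__name__, args, kwargs)
            raise
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            log.info("%s took %.2f ms", fn.__name__, elapsed_ms)
    return wrapper

@observe
def parse_order(payload: dict) -> str:
    return payload["order_id"]

print(parse_order({"order_id": "A-17"}))
```

A real setup would forward these events to a tracing backend rather than the log, but the principle is the same: instrumentation chosen around the questions developers ask of their own code.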

Understanding the value of holistic data management

Data holds valuable insights into customer behaviour, preferences and needs. Holistic management of data enables organisations to consolidate and analyse their customers’ data from multiple sources, leading to a comprehensive understanding of their target audience. This knowledge allows companies to tailor their products, services and marketing efforts to better meet customer expectations, which can result in improved customer satisfaction and loyalty. Some on-market tools also let organisations map the relationships between their customers. Mapping these relationships can be very beneficial, especially for targeted marketing: imagine an e-mail arriving in your inbox shortly before your anniversary, suggesting a specifically tailor-made gift for your partner. It is extremely important for an organisation to have a competitive edge and to stay relevant. Data that is not holistically managed will slow down the organisation's ability to make timely and informed decisions, hindering its ability to respond quickly to changing market dynamics and stay ahead of its competitors.

Why Today's CISOs Must Embrace Change

While this is a long-standing challenge, I've seen the tide turn over the past four or five years, especially when COVID happened. Just the nature of the event necessitated dramatic change in organizations. During the pandemic, CISOs who said "no, no, no," lost their place in the organization, while those who said yes and embraced change were elevated. Today we're hitting an inflection point where organizations that embrace change will outpace the organizations that don't. Organizations that don't will become the low-hanging fruit for attackers. We need to adopt new tools and technologies while, at the same time, we help guide the business across the fast-evolving threat landscape. Speaking of new technologies, I heard someone say AI and tools won't replace humans, but the humans that leverage those tools will replace those that don't. I really like that — these tools become the "Iron Man" suit for all the folks out there who are trying to defend organizations proactively and reactively. Leveraging all those tools in combination with great intelligence, I think, enables organizations to outpace the organizations that are moving more slowly and many adversaries.

Navigating Digital Transformation While Cultivating a Security Culture

When it comes to security and digital transformation, one of the first things that comes to mind for Reynolds is the tech surface. “As you evolve and transition from legacy to new, both stay parallel running, right? Being able to manage the old but also integrate the new, but with new also comes more complexity, more security rules,” he says. “A good example is cloud security. While it’s great for onboarding and just getting stuff up and running, they do have this concept of shared security where they manage infrastructure, they manage the storage, but really, the IAM, the access management, the network configuration, and ingress and egress traffic from the network are still your responsibility. And as you evolve to that and add more and more cloud providers, more integrations, it becomes much more complex.” “There’s also more data transference, so there are a lot of data privacy and compliance requirements there, especially as the world evolves with GDPR, which everyone hopefully by now knows.

Breach Roundup: Zenbleed Flaw Exposes AMD Ryzen CPUs

A critical vulnerability affecting AMD's Zen 2 processors, including popular CPUs such as the Ryzen 5 3600, was uncovered by Google security researcher Tavis Ormandy. Dubbed Zenbleed, the flaw allows attackers to steal sensitive data such as passwords and encryption keys without requiring physical access to the computer. Tracked as CVE-2023-20593, the vulnerability can be exploited remotely, making it a serious concern for cloud-hosted services. The vulnerability affects the entire Zen 2 product range, including AMD Ryzen and Ryzen Pro 3000/4000/5000/7020 series, and the EPYC "Rome" data center processors. Data can be transferred at a rate of 30 kilobits per core, per second, allowing information extraction from various software running on the system, including virtual machines and containers. Zenbleed operates without any special system calls or privileges, making detection challenging. While AMD released a microcode patch for second-generation Epyc 7002 processors, other CPU lines will have to wait until at least October 2023. 

The Role of Digital Twins in Unlocking the Cloud's Potential

A DT, in essence, is a high-fidelity virtual model designed to mirror an aspect of a physical entity accurately. Let’s imagine a piece of complex machinery in a factory. This machine is equipped with numerous sensors, each collecting data related to critical areas of functionality from temperature to mechanical stress, speed, and more. This vast array of data is then transmitted to the machine’s digital counterpart. With this rich set of data, the DT becomes more than just a static replica. It evolves into a dynamic model that can simulate the machinery’s operation under various conditions, study performance issues, and even suggest potential improvements. The ultimate goal of these simulations and studies is to generate valuable insights that can be applied to the original physical entity, enhancing its performance and longevity. The resulting architecture is a dual Cyber-Physical System with a constant flow of data that brings unique insights into the physical realm from the digital realm.
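
The factory-machine scenario above can be sketched in a few lines. This is an illustrative toy, not a real DT platform: the sensor values, the temperature threshold, and the class name are all invented for the example.

```python
from statistics import mean

class MachineTwin:
    """Toy digital twin: ingests sensor telemetry from its physical
    counterpart and derives insights to feed back to the machine."""

    def __init__(self, max_temp_c: float = 90.0):  # assumed operating limit
        self.max_temp_c = max_temp_c
        self.history = []

    def ingest(self, temp_c: float, rpm: float) -> None:
        """Receive one telemetry sample from the physical machine."""
        self.history.append({"temp_c": temp_c, "rpm": rpm})

    def insights(self) -> dict:
        """Summarize state and flag excursions outside the envelope."""
        temps = [s["temp_c"] for s in self.history]
        return {
            "avg_temp_c": round(mean(temps), 1),
            "overheating": max(temps) > self.max_temp_c,
        }

twin = MachineTwin()
for temp, rpm in [(71.0, 1500), (83.5, 1600), (94.2, 1580)]:
    twin.ingest(temp, rpm)
print(twin.insights())
```

A production twin would additionally run forward simulations under hypothetical loads; the closed loop of telemetry in, insight out is the core of the Cyber-Physical System the paragraph describes.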

The power of process mining in Power Automate

Having tools that identify and optimize processes is an important foundation for any form of process automation, especially as we often must rely on manual walkthroughs. We need to be able to see how information and documents flow through a business in order to be able to identify places where systems can be improved. Maybe there’s an unnecessary approval step between data going into line-of-business applications and then being booked into a CRM tool, where it sits for several days. Modern process mining tools take advantage of the fact that much of the data in our businesses is already labeled. It’s tied to database tables or sourced from the line-of-business applications we have chosen to use as systems of record. We can use these systems to identify the data associated with, say, a contract, and where it needs to be used, as well as who needs to use it. With that data we can then identify the process flows associated with it, using performance indicators to identify inefficiencies, as well as where we can automate manual processes—for example, by surfacing approvals as adaptive cards in Microsoft Teams or in Outlook.
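
The core idea, recovering process flows and waiting times from timestamped events in a system of record, can be sketched directly. The event log below is invented (events per case are assumed to arrive in time order), but the shape mirrors the line-of-business-to-CRM example above:

```python
from collections import defaultdict
from datetime import datetime

events = [  # (case_id, activity, timestamp) - illustrative data
    ("C1", "entered_lob_app", "2023-07-01 09:00"),
    ("C1", "approval",        "2023-07-03 09:00"),
    ("C1", "booked_in_crm",   "2023-07-06 09:00"),
    ("C2", "entered_lob_app", "2023-07-02 10:00"),
    ("C2", "approval",        "2023-07-02 15:00"),
    ("C2", "booked_in_crm",   "2023-07-07 10:00"),
]

def transition_hours(events):
    """Average elapsed hours for each observed activity-to-activity hop."""
    cases = defaultdict(list)
    for case, activity, ts in events:
        cases[case].append((activity, datetime.strptime(ts, "%Y-%m-%d %H:%M")))
    durations = defaultdict(list)
    for steps in cases.values():
        for (a, t1), (b, t2) in zip(steps, steps[1:]):
            durations[(a, b)].append((t2 - t1).total_seconds() / 3600)
    return {hop: sum(v) / len(v) for hop, v in durations.items()}

for hop, hours in transition_hours(events).items():
    print(hop, f"{hours:.1f}h")
```

Here the approval-to-CRM hop dominates the cycle time, which is exactly the kind of multi-day bottleneck the paragraph suggests targeting for automation.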

Data Program Disasters: Unveiling the Common Pitfalls

In the realm of data management, it’s tempting to be swayed by the enticing promises of new tools that offer lineage, provenance, cataloguing, observability, and more. However, beneath the glossy marketing exterior lies the lurking devil of hidden costs that can burn a hole in your wallet. Let’s consider an example: while you may have successfully negotiated a reduction in compute costs, you might have overlooked the expenses associated with data egress. This oversight could lead to long-term vendor lock-in or force you to spend the hard-earned savings secured through skilful negotiation on the data outflow. This is just one instance among many; there are real-world examples where organizations chose tools solely based on their features, only to discover later that those tools failed to fully comply with the regulations of their industry or the country they operate in. In such cases, you’re left with two options: either wait for the vendor to become compliant, severely stifling your go-to-market strategy, or supplement your setup with additional services, effectively negating your cost-saving efforts and bloating your architecture.
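
The egress example is easy to put numbers on. All prices below are illustrative assumptions, not any vendor's actual rates:

```python
# Back-of-the-envelope: a negotiated compute discount can be wiped out
# by overlooked egress charges.

def monthly_cost(compute_usd: float, egress_tb: float, egress_usd_per_gb: float) -> float:
    """Total monthly bill: compute plus per-GB egress."""
    return compute_usd + egress_tb * 1000 * egress_usd_per_gb

# Before the new tool: $10k compute, data stays put.
before = monthly_cost(compute_usd=10_000, egress_tb=0, egress_usd_per_gb=0.09)

# After: 20% compute discount negotiated, but 30 TB/month now leaves
# the platform at an assumed $0.09/GB.
after_discount = monthly_cost(compute_usd=8_000, egress_tb=30, egress_usd_per_gb=0.09)

print(f"before: ${before:,.0f}/mo, after discount plus egress: ${after_discount:,.0f}/mo")
```

Under these made-up figures the "discounted" bill is higher than the original, which is the hidden-cost trap in one line of arithmetic.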

Quote for the day:

"It's very important in a leadership role not to place your ego at the foreground and not to judge everything in relationship to how your ego is fed." -- Ruth J. Simmons

Daily Tech Digest - July 27, 2023

'FraudGPT' Malicious Chatbot Now for Sale on Dark Web

Both WormGPT and FraudGPT can help attackers use AI to their advantage when crafting phishing campaigns, generating messages aimed at pressuring victims into falling for business email compromise (BEC), and other email-based scams, for starters. FraudGPT also can help threat actors do a slew of other bad things, such as: writing malicious code; creating undetectable malware; finding non-VBV bins; creating phishing pages; building hacking tools; finding hacking groups, sites, and markets; writing scam pages and letters; finding leaks and vulnerabilities; and learning to code or hack. Even so, it does appear that helping attackers create convincing phishing campaigns is still one of the main use cases for a tool like FraudGPT, according to Netenrich. ... As phishing remains one of the primary ways that cyberattackers gain initial entry onto an enterprise system to conduct further malicious activity, it's essential to implement conventional security protections against it. These defenses can still detect AI-enabled phishing, and, more importantly, subsequent actions by the threat actor.

Key factors for effective security automation

A few factors generally drive the willingness to automate security. One factor is if the risk of not automating exceeds the risk of an automation going wrong: If you conduct business in a high-risk environment, the potential for damage when not automating can be higher than the risk of triggering an automated response based on a false positive. Financial fraud is a good example, where banks routinely automatically block transactions they find to be suspicious, because a manual process would be too slow. Another factor is when the damage potential of an automation going wrong is low. For example, there is no potential damage when trying to fetch a non-existent file from a remote system for forensic analysis. But what really matters most is how reliable automation is. For example, many threat actors today use living-off-the-land techniques, such as using common and benign system utilities like PowerShell. From a detection perspective, there are no uniquely identifiable characteristics like a file hash, or a malicious binary to inspect in a sandbox. 
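
The risk calculus in the first factor can be written down explicitly: automate the response only when the expected loss from waiting exceeds the expected cost of acting on a false positive. The numbers below are invented for the example.

```python
def should_auto_block(risk_score: float, fraud_loss_usd: float,
                      fp_cost_usd: float, fp_rate: float) -> bool:
    """Auto-block when expected avoided loss beats expected false-positive cost.

    risk_score:     model-estimated probability the event is malicious
    fraud_loss_usd: damage if a real attack proceeds unblocked
    fp_cost_usd:    damage of wrongly blocking a legitimate action
    fp_rate:        historical false-positive rate of this detection
    """
    expected_loss = risk_score * fraud_loss_usd
    expected_fp_cost = fp_rate * fp_cost_usd
    return expected_loss > expected_fp_cost

# High-risk wire transfer: block first, review later.
print(should_auto_block(risk_score=0.8, fraud_loss_usd=50_000,
                        fp_cost_usd=200, fp_rate=0.05))   # True
# Marginal anomaly with low stakes: route to a human instead.
print(should_auto_block(risk_score=0.001, fraud_loss_usd=500,
                        fp_cost_usd=200, fp_rate=0.05))   # False
```

This also captures the second factor: when `fp_cost_usd` is near zero (fetching a non-existent file for forensics), automation is almost always justified.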

API-First Development: Architecting Applications with Intention

More traditionally, tech companies often started with a particular user experience in mind when setting out to develop a product. The API was then developed in a more or less reactive way to transfer all the necessary data required to power that experience. While this approach gets you out the door fast, it isn’t very long before you probably need to go back inside and rethink things. Without an API-first approach, you feel like you’re moving really fast, but it’s possible that you’re just running from the front door to your driveway and back again without even starting the car. API-first development flips this paradigm by treating the API as the foundation for the entire software system. Let’s face it, you are probably going to want to power more than one developer, maybe even several different teams, all possibly even working on multiple applications, and maybe there will even be an unknown number of third-party developers. Under these fast-paced and highly distributed conditions, your API cannot be an afterthought.

What We Can Learn from Australia’s 2023-2030 Cybersecurity Strategy

One of the challenges facing enterprises in Australia today is a lack of clarity in terms of cybersecurity obligations, both from an operational perspective and as organizational directors. Though there are a range of implicit cybersecurity obligations designated to Australian enterprises and nongovernment entities, it is the need of the hour to have more explicitly stated obligations to increase national cyberresilience. There are also opportunities to simplify and streamline existing regulatory frameworks to ensure easy adoption of those frameworks and cybersecurity obligations. ... Another important aspect of the upcoming Australian Cybersecurity Strategy is to strengthen international cyberleaders to enable them to seize opportunities and address challenges presented by the shifting cyberenvironment. To keep up with new and emerging technologies, this cybersecurity strategy aims to take tangible steps to shape global thinking about cybersecurity.

Is your data center ready for generative AI?

Generative AI applications create significant demand for computing power in two phases: training the large language models (LLMs) that form the core of generative AI systems, and then operating the application with these trained LLMs, says Raul Martynek, CEO of data center operator DataBank. “Training the LLMs requires dense computing in the form of neural networks, where billions of language or image examples are fed into a system of neural networks and repeatedly refined until the system ‘recognizes’ them as well as a human being would,” Martynek says. Neural networks require tremendously dense high-performance computing (HPC) clusters of GPU processors running continuously for months, or even years at a time, Martynek says. “They are more efficiently run on dedicated infrastructure that can be located close to the proprietary data sets used for training,” he says. The second phase is the “inference process” or the use of these applications to actually make inquiries and return data results.
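
Rough arithmetic shows why the training phase demands months of dense GPU capacity. This uses the common approximation of roughly 6 FLOPs per parameter per training token; the model size, token count, sustained GPU throughput, and cluster size are all illustrative assumptions, not figures from the article.

```python
# Order-of-magnitude estimate of LLM training time on a GPU cluster.
params = 70e9        # assumed 70B-parameter model
tokens = 1.4e12      # assumed 1.4T training tokens
train_flops = 6 * params * tokens   # ~6 FLOPs / parameter / token heuristic

gpu_flops = 300e12   # assumed ~300 TFLOPS sustained per high-end GPU
gpus = 1024          # assumed cluster size

days = train_flops / (gpu_flops * gpus) / 86400
print(f"~{days:.0f} days of continuous training on {gpus} GPUs")
```

Even with a thousand GPUs running flat out, a single training run under these assumptions occupies the cluster for weeks, which is the dedicated-infrastructure case Martynek makes; inference, by contrast, spreads much smaller per-query loads across time.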

Siloed data: the mountain of lost potential

Given AI’s growing capabilities for handling customer service are only made possible through data, the risk of not breaking down internal data siloes is sizeable, not just in terms of missing opportunities. Companies could also see a decline in the speed and quality of their customer service as contact centre agents need to spend longer navigating multiple platforms and dashboards to find the information needed to help answer customers’ queries. Eliminating data siloes requires educating everyone in the business to understand the necessity of sharing data through an open culture and encouraging the data sides of operations to co-ordinate efforts, align visions and achieve goals. The synchronisation of business operations with customer experience, alongside adopting a data-driven approach, can produce significant benefits such as increased customer spending. ... Data, working for and with AI, must be placed at the centre of the business model. This means getting board buy-in to establish a data factory run by qualified data engineers and analysts who are capable of driving the collection and use of data within the organisation.

An Overview of Data Governance Frameworks

Data governance frameworks are built on four key pillars that ensure the effective management and use of data across an organization. These pillars ensure data is accurate, can be effectively combined from different sources, is protected and used in compliance with laws and regulations, and is stored and managed in a way that meets the needs of the organization. ... Furthermore, a lack of governance can lead to confusion and duplication of effort, as different departments or individual users try to manage data with their own methods. A well-designed data governance framework ensures all users understand the rules for managing data and that there is a clear process for making changes or additions to the data. It unifies teams, improving communication between different teams and allowing different departments to share best practices. In addition, a data governance framework ensures compliance with laws and regulations. From HIPAA to GDPR, there are a multitude of data privacy laws and regulations all over the world. Running afoul of these legal provisions is expensive in terms of fines and settlement costs and can damage an organization’s reputation.

Governance — the unsung hero of ESG

What's interesting is that for the most part, they're all at different stages of transformation and managing the risks of transformation. A board has four responsibilities: overseeing performance; approving and funding the strategy; hiring and developing the succession plan; and risk management. Depending on where you are in a normal cycle of a business or the market, the board is involved in all four. Also, I take lessons that I've learned at other boards and apply them possibly to Baker Hughes' situation and vice versa: take some of the lessons that I'm learning and the things that I'm hearing in the Baker Hughes situation — unattributed, of course — and bring it into other boards. Sometimes there's a nice element of sharing. As you know, Baker Hughes has a very strong Board and I am a good student at taking down good and thoughtful questions from board members and bringing that to other company boards, if appropriate.

Why whistleblowers in cybersecurity are important and need support

“Governments should have a whistleblower program with clear instructions on how to disclose information, then offer the resources to enable procedures to encourage employees to come forward and guarantee a safe reporting environment,” she says. Secondly, nations need to upgrade their legislation to include strong anti-retaliation protection against tech workers, making it unlawful for various entities to engage in reprisal. This includes job-related pressure, harassment, doxing, blacklisting, and retaliatory investigations. ... To further increase chances, employees can be offered regular training sessions in which they are informed about the importance of coming forward on cybersecurity issues, the ways to report wrongdoing, and the protection mechanisms they could access. Moreover, leadership should explain that it has zero tolerance for retaliation. “Swift action should be taken if any instances of retaliation come to light,” according to Empower Oversight. The message leadership should convey is that issues are taken seriously and that C-level executives are open for conversation if the situation requires such an action.

Cloud Optimization: Practical Steps to Lower Your Bills

Optimization is always an iterative process, requiring continual adjustment as time goes on. However, there are many quick wins and strategies that you can implement today to refine your cloud footprint. Unused virtual machines (VMs), storage and bandwidth can lead to unnecessary expenses; conducting periodic evaluations of your cloud usage and identifying such underutilized resources can effectively minimize costs. Check your cloud console now. You might just find a couple of VMs sitting there idle, accidentally left behind after the work was done. Temporary backup resources, such as VMs and storage, are frequently used for storing data and application backups. Automate the deletion process of these temporary backup resources to save money. Selecting the appropriate tier entails choosing the cloud resource that aligns best with your requirements. For instance, if you anticipate a high volume of traffic and demand, opting for a high-end VM would be suitable. Conversely, for smaller projects, a lower-end VM might suffice. 
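
The first quick win, spotting VMs left running after the work was done, can be sketched as a simple filter over an inventory. The VM names, utilization figures, and thresholds below are invented; in practice the inventory would come from your provider's monitoring API.

```python
# Flag VMs whose recent CPU utilization suggests they were forgotten.
IDLE_CPU_PCT = 5.0    # assumed threshold: below this, the VM is doing nothing useful
MIN_IDLE_DAYS = 7     # assumed grace period before flagging

vms = [
    {"name": "web-prod-1",    "avg_cpu_pct": 41.0, "idle_days": 0},
    {"name": "batch-tmp-3",   "avg_cpu_pct": 0.4,  "idle_days": 19},
    {"name": "demo-leftover", "avg_cpu_pct": 1.2,  "idle_days": 33},
]

idle = [v["name"] for v in vms
        if v["avg_cpu_pct"] < IDLE_CPU_PCT and v["idle_days"] >= MIN_IDLE_DAYS]
print("candidates for shutdown:", idle)
```

Running a report like this on a schedule, and pairing it with automated deletion of expired temporary backups, turns the periodic evaluation the paragraph recommends into a standing process rather than a one-off cleanup.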

Quote for the day:

"Your job gives you authority. Your behavior gives you respect." -- Irwin Federman

Daily Tech Digest - July 26, 2023

How digital humans can make healthcare technology more patient-centric

Like humans, digital humans have anatomy. Several technologies are used to create digital humans. The Representation: The “face” of the digital entity can be created in likeness to a real or caricature of a human. The quality of this representation is critical to a successfully designed digital human. Natural Language Processing (NLP) or Natural Language Understanding (NLU): NLP/NLU ensures that the digital human can properly interpret information, such as speech detection, speech-to-text translation, and language recognition and detection. Advanced forms of NLP/NLU will include sign language as well. Cognitive Services: Cognitive services are used for creating personalized communication including language translation, speech synthesis, voice customization, speech prosody and pitch, nomenclature and specialized pronunciation. Artificial Intelligence: The AI layer–whether generative, extractive or other forms–provides contextual conversation response, context recognition and for generative AI, content creation.

CISO to BISO – What's your next role?

The role of a BISO has emerged over the past decade, as organisations recognise the need for dedicated security roles and skills within specific business units or departments. While it is challenging to pinpoint an exact date when the role of BISO became established across all industries, it can be traced back to the increasing emphasis on information security, the evolving nature of cybersecurity threats and the increasingly complex technical infrastructures in use. As businesses have become more digital, data-centric, and interconnected, the complexity and diversity of security risks have grown exponentially with it. Traditional approaches to information security, where the responsibility solely resides with the IT department or a centralised security team, have proved inadequate to address the unique security challenges faced by businesses today. ... When implementing information security in larger organisations, we would look for security champions within operational or support functions. People who showed an interest in the world of cybersecurity were usually offered a support role on a voluntary basis. 

Top cybersecurity tools aimed at protecting executives

A recent Ponemon report, sponsored by BlackCloak, revealed that 42% of respondents indicated that key executives and family members have already experienced at least one cyberattack. While it's likely that cybercriminals will target executives and the digital assets they have access to, organizations are not responding with suitable strategies, budgets, and staff, the report found. Just over half (58%) of respondents reported that the prevention of threats against executives and their digital assets is not covered in their cyber, IT and physical security strategies and budget. The lack of attention is demonstrated with only 38% of respondents reporting a dedicated team to prevent or respond to cyber or privacy attacks against executives and their families. The best practice to do this well would be to protect the executive as well as their family, inner circle, and associates with a broad range of measures, Agency's Executive Digital Protection report noted. The solutions need to balance breadth, value, privacy, and specialization, it said. 

How WebAssembly will transform edge computing

As the next major technical abstraction, Wasm aspires to address the common complexity inherent in the management of the day-to-day dependencies embedded into every application. It addresses the cost of operating applications that are distributed horizontally, across clouds and edges, to meet stringent performance and reliability requirements. Wasm’s tiny size and secure sandbox mean it can be safely executed everywhere. With a cold start time in the range of 5 to 50 microseconds, Wasm effectively solves the cold start problem. It is both compatible with Kubernetes while not being dependent upon it. Its diminutive size means it can be scaled to a significantly higher density than containers and, in many cases, it can even be performantly executed on demand with each invocation. But just how much smaller is a Wasm module compared to a micro-K8s containerized application? An optimized Wasm module is typically around 20 KB to 30 KB in size. When compared to a Kubernetes container, the Wasm compute units we want to distribute are several orders of magnitude smaller. 
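
A back-of-the-envelope check of the density claim, using the article's 20-30 KB module figure and an assumed (not stated in the article) container image size of about 100 MB:

```python
# How many Wasm modules vs. containers fit in a fixed memory budget?
wasm_kb = 25                     # midpoint of the article's 20-30 KB range
container_kb = 100 * 1024        # assumed ~100 MB container image
budget_kb = 16 * 1024 * 1024     # an illustrative 16 GB budget

modules = budget_kb // wasm_kb
containers = budget_kb // container_kb
print(f"{modules:,} Wasm modules vs {containers} containers in 16 GB")
```

The ratio is roughly four thousand to one under these assumptions, which is what "several orders of magnitude smaller" means in practice for scale-to-density at the edge.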

Data Governance Trends and Best Practices for Storage Environments

The more intelligent the data layer is, the more value the data can provide. More valuable data makes the role of data governance stronger within the organization. Active archive solutions can serve as a framework for data governance by including an intelligent data management software layer that automatically places data where it belongs and optimizes its location based on cost, performance, and user access needs. “Data governance is the process of managing the availability, usability, integrity and security of enterprise data,” said Rich Gadomski, head of tape evangelism at FUJIFILM Recording Media U.S.A. and co-chair of the Active Archive Alliance. ... Supporting active archives with optical disk storage technologies can provide long-term data preservation. These technologies are designed to withstand environmental factors like temperature, humidity, and magnetic interference, ensuring the integrity and longevity of archived data. With a typical lifespan of hundreds of years or more, optical disks are well-suited for archival purposes.

Dr. Pankaj Setia on the challenges that will redefine CIOs’ careers

First, a risk-averse culture may be addressed through a two-pronged approach. On one front, CIOs must champion training and engagement of employees, to create a digital mindset and enhance understanding of the digital transformation being undertaken. It is imperative that the employees are excited about the transformation. ... On the other, CIOs must work toward getting buy-in from top management. For CIOs to get desired results, the board and top management team (TMT) must actively champion digital transformation initiatives. Many examples from the corporate world underline the role of top leadership in engaging and motivating employee teams. Second, overcoming the barriers due to siloed strategy is a complex endeavor. It is not always easy to overcome these, as professional management relies on specialization in a functional domain (e.g., marketing, finance, human resources, etc.). However, because digital transformation inherently spans functional domains, siloed strategies — that emphasize super specialization — are not optimal. Therefore, CIOs should look to create cross-functional teams.

Risks and Strategies to Use Generative AI in Software Development

Among the risks of using AI in software development is the potential that it regurgitates bad code that has been making the rounds in the open-source world. “There’s bad code being copied and used everywhere,” says Muddu Sudhakar, CEO and co-founder of Aisera, developer of a generative AI platform for enterprise. “That’s a big risk.” The risk is not simply that poorly written code gets repeated -- the bad code might be put into play by bad actors looking to introduce vulnerabilities they can exploit at a later date. Sudhakar says organizations that draw upon generative AI and other open-source resources should put controls in place to spot such risks if they intend to make AI part of the development equation. “It’s in their interest because all it takes is one bad code,” he says, pointing to the long-running hacking campaign behind the SolarWinds data breach. The skyrocketing appeal of AI for development seems to outweigh concerns about the potential for data to leak or for other issues to occur. “It’s so useful that it’s worth actually being aware of the risks and doing it anyway,” says Babak Hodjat, CTO of AI and head of Cognizant AI Labs.

Supply Chain, Open Source Pose Major Challenge to AI Systems

Bengio said one big risk area around AI systems is open-source technology, which "opens the door" to bad actors. Adversaries can take advantage of open-source technology without huge amounts of compute or strong expertise in cybersecurity, according to Bengio. He urged the federal government to establish a definition of what constitutes open-source technology - even if it changes over time - and use it to ensure future open-source releases for AI systems are vetted for potential misuse before being deployed. "Open source is great for scientific progress," Bengio said. "But if nuclear bombs were software, would you allow open-source nuclear bombs?" Bengio said the United States must ensure that spending on AI safety is equivalent to how much the private sector is spending on new AI capabilities, either through incentives to businesses or direct investment in nonprofit organizations. The safety investments should address the hardware used in AI systems as well as cybersecurity controls necessary to safeguard the software that powers AI systems.

Zero-Day Vulnerabilities Discovered in Global Emergency Services Communications Protocol

In a demonstration video of CVE-2022-24401, researchers showed that an attacker would be able to capture the encrypted message by targeting a radio to which the message was being sent. Midnight Blue founding partner Wouter Bokslag says that in none of the circumstances for this vulnerability do you get your hands on a key: "The only thing you're getting is the key stream, which you can use to decrypt arbitrary frames, or arbitrary messages, that go over the network." A second demonstration video, of CVE-2022-24402, reveals that there is a backdoor in the TEA1 algorithm affecting networks that rely on TEA1 for confidentiality and integrity. The researchers also discovered that the TEA1 algorithm uses an 80-bit key that an attacker could brute-force, allowing them to listen in on communications undetected. Bokslag admits that backdoor is a strong term, but says it is justified in this instance. "As you feed an 80-bit key to TEA1, it flows through a reduction step which leaves it with only 32 bits of key material, and it will carry on doing the decryption with only those 32 bits," he says.
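The two findings above lend themselves to a short illustration. The sketch below is a toy Python example, not the real TEA1 cipher or the TETRA protocol; the `xor_bytes` helper, the sample message, and the assumed trial rate of one billion keys per second are all illustrative. It shows why recovering a keystream is as good as decryption for a stream cipher, and why a 32-bit effective key is trivially brute-forceable while an 80-bit key is not:

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# --- Keystream recovery (the CVE-2022-24401 scenario) -------------------
# In a stream cipher, ciphertext = plaintext XOR keystream. An attacker
# who recovers the keystream can decrypt traffic without ever learning
# the key itself.
keystream = os.urandom(16)                        # stands in for a recovered keystream
ciphertext = xor_bytes(b"DISPATCH UNIT 07", keystream)
recovered = xor_bytes(ciphertext, keystream)      # no key required
assert recovered == b"DISPATCH UNIT 07"

# --- Effective key size (the CVE-2022-24402 scenario) -------------------
RATE = 1_000_000_000  # assumed: ~1e9 key trials per second on commodity hardware

def exhaust_seconds(key_bits: int, rate: int = RATE) -> float:
    """Worst-case seconds to try every key in a key_bits-bit keyspace."""
    return (2 ** key_bits) / rate

print(f"80-bit keyspace: {exhaust_seconds(80) / (3600 * 24 * 365):.1e} years")
print(f"32-bit keyspace: {exhaust_seconds(32):.1f} seconds")
```

At the assumed trial rate, exhausting the 32-bit space takes only a few seconds, while the full 80-bit space would take tens of millions of years; that gap is why the reduction step functions, in effect, as a backdoor.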

Enterprises should layer-up security to avoid legal repercussions

There are two competing temptations in the technology landscape that the seasoned security professional must navigate. The first is the temptation to place total trust in the power of the tool. An overly optimistic reliance on vendor tools and promises can leave security issues undetected if the tools are not properly implemented and operationalized in your environment. A shiny SIEM tool, for example, is useless unless you have clearly documented response actions to take for each alert, as well as fully trained personnel to handle investigations. The second temptation, which I believe is more prevalent within tech and SaaS companies, is to trust no tool except in-house tech. The thought process goes as follows: “Since we have a solid development team, and we want to keep a bench of developers for any eventuality, we need to keep their skills sharp, so we might as well build our own tools.” It’s a sound argument, up to a point. However, it may be a bit arrogant to believe your company has the expertise to develop best-in-class SIEM solutions, ticketing systems, SAST tools, and what have you.

Quote for the day:

"If you don't demonstrate leadership character, your skills and your results will be discounted, if not dismissed." -- Mark Miller