Daily Tech Digest - August 20, 2023

Central Bank Digital Currency (CBDC) and blockchain enable the future of payments

CBDC has the potential to transform the future of payments. It can be used to create programmable money that can be spent only on specific things. For example, a government could issue a stimulus package that can only be spent on certain goods and services. This would ensure that the money is spent in the intended manner and would reduce the risk of fraud. CBDC can also improve financial inclusion. According to the World Bank, around 1.7 billion people do not have access to basic financial services. CBDC can address this by providing a digital currency that anyone with a smartphone can use, without the need for a bank account. When a CBDC holder uses their phone as a medium for transactions, it becomes crucial to establish a strong link between their digital identity and the device they are using. This link is essential to ensure that the right party is involved in the transaction, mitigating the risk of fraud and promoting trust in the digital financial ecosystem. In this way, CBDC and digital identity can work together to improve financial inclusion.
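
As a rough illustration (not from the article, and far simpler than any real CBDC design), a programmable-money rule restricting stimulus funds to certain merchant categories might look like the sketch below; the categories and wallet model are made up:

```python
# Hypothetical sketch of a programmable-money spend restriction.
# Merchant categories and the wallet model are illustrative, not from any real CBDC spec.
from dataclasses import dataclass

ALLOWED_CATEGORIES = {"groceries", "pharmacy", "utilities"}  # stimulus allow-list

@dataclass
class Transaction:
    wallet_id: str
    merchant_category: str
    amount: int  # minor units, e.g. cents

def validate_stimulus_spend(tx: Transaction, balance: int) -> bool:
    """Approve only if funds suffice and the merchant category is allowed."""
    if tx.amount <= 0 or tx.amount > balance:
        return False
    return tx.merchant_category in ALLOWED_CATEGORIES

print(validate_stimulus_spend(Transaction("w1", "groceries", 2500), balance=10000))    # True
print(validate_stimulus_spend(Transaction("w1", "electronics", 2500), balance=10000))  # False
```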


A statistical examination of utilization trends in decentralized applications

Decentralized applications (dApps) have proliferated in recent years, but their long-term viability is a topic of debate. For dApps to be sustainable, and suitable for integration into larger service networks, they need to attract users and promise reliable availability. Assessing their longevity is therefore crucial. Analyzing the utilization trajectory of a service is, however, challenging due to several factors, such as demand spikes, noise, autocorrelation, and non-stationarity. In this study, we employ robust statistical techniques to identify trends in currently popular dApps. Our findings demonstrate that a significant proportion of dApps, across a range of categories, exhibit statistically significant positive overall trends, indicating that success in decentralized computing can be sustainable and transcends specific fields. However, a substantial number of dApps show negative trends, with a disproportionately high share coming from the decentralized finance (DeFi) category.
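
The abstract does not name the techniques used; one common robust choice for noisy, non-normal series is the Mann-Kendall trend test. A minimal sketch, omitting the tie and autocorrelation corrections a real study would need:

```python
# Minimal Mann-Kendall trend test: a common robust trend check for noisy series.
# Sketch only; omits the tie and autocorrelation corrections a real study would need.
import math

def mann_kendall(series):
    n = len(series)
    # S counts concordant minus discordant pairs over all i < j.
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18  # variance of S, no tie correction
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    # |z| > 1.96 indicates a significant monotonic trend at the 5% level.
    return s, z

daily_users = [120, 135, 128, 150, 160, 155, 170, 180, 175, 190]  # toy dApp usage
s, z = mann_kendall(daily_users)
print(f"S={s}, z={z:.2f}, significant={abs(z) > 1.96}")
```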


How SaaS Companies Can Monetize Generative AI

Rather than building these models from scratch, many companies elect to leverage OpenAI’s APIs to call GPT-4 (or other models), and serve the response back to customers. To obtain complete visibility into usage costs and margins, each API call to and from OpenAI tech should be metered to understand the size of the input and the corresponding backend costs, as well as the output, processing time and other relevant performance metrics. By metering both the customer-facing output and the corresponding backend actions, companies can create a real-time view into business KPIs like margin and costs, as well as technical KPIs like service performance and overall traffic. After creating the meters, deploy them to the solution or application where events are originating to begin tracking real-time usage. Once the metering infrastructure is deployed, begin visualizing usage and costs in real time as usage occurs and customers leverage the generative services. Identify power users and lagging accounts and empower customer-facing teams with contextual data to provide value at every touchpoint.
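
As a sketch of what such metering could look like against the OpenAI Python client (v1.x), the pricing figures and the destination of the meter events below are assumptions for illustration:

```python
# Sketch of per-call metering around the OpenAI Python client (v1.x API).
# The pricing numbers and the meter sink are illustrative assumptions.
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PRICE_PER_1K = {"prompt": 0.03, "completion": 0.06}  # assumed per-1K-token rates, USD

def metered_chat(messages, model="gpt-4"):
    start = time.monotonic()
    resp = client.chat.completions.create(model=model, messages=messages)
    latency = time.monotonic() - start
    usage = resp.usage  # token counts reported by the API
    cost = (usage.prompt_tokens * PRICE_PER_1K["prompt"]
            + usage.completion_tokens * PRICE_PER_1K["completion"]) / 1000
    meter_event = {  # emit this to your metering/billing pipeline
        "model": model,
        "prompt_tokens": usage.prompt_tokens,
        "completion_tokens": usage.completion_tokens,
        "latency_s": round(latency, 3),
        "backend_cost_usd": round(cost, 6),
    }
    print(meter_event)
    return resp.choices[0].message.content

print(metered_chat([{"role": "user", "content": "Summarize our Q2 usage."}]))
```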


“Auth” Demystified: Authentication vs Authorization

There are two technical approaches to modern authorization with growing ecosystems around them: policy-as-code and policy-as-data. They are similar in that both approaches advocate decoupling authorization logic from the application code. But they also have differences. In policy-as-code systems, the authorization policy is written in a domain-specific language, and stored and versioned in its own repository like any other code. Open Policy Agent (OPA) is one well-known example of this approach. It is a CNCF graduated project that is mostly used in infrastructure authorization use cases, such as Kubernetes admission control. It provides a general-purpose decision engine to enforce authorization logic, and a language called Rego to define that logic as policy. The policy-as-data approach determines access based on relationships between users and the underlying application data. Rather than relying on rules in a policy, these systems use the relationships between subjects (users/groups) and objects (resources) in the application.
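
To make the policy-as-data idea concrete, here is a minimal sketch of authorization derived from relationship tuples, in the spirit of Zanzibar-style systems; the data and relation names are invented:

```python
# Minimal policy-as-data sketch: access decisions from relationship tuples
# (in the spirit of Google's Zanzibar). Data and relation names are illustrative.

# (object, relation, subject) tuples are the "policy data".
RELATIONS = {
    ("doc:roadmap", "owner", "user:alice"),
    ("doc:roadmap", "viewer", "group:eng"),
    ("group:eng", "member", "user:bob"),
}

def has_relation(obj: str, relation: str, subject: str) -> bool:
    if (obj, relation, subject) in RELATIONS:
        return True
    # Expand group membership: a user inherits relations held by their groups.
    for (o, r, s) in RELATIONS:
        if o == obj and r == relation and s.startswith("group:"):
            if (s, "member", subject) in RELATIONS:
                return True
    return False

def can_view(doc: str, user: str) -> bool:
    # Owners can do everything viewers can.
    return has_relation(doc, "viewer", user) or has_relation(doc, "owner", user)

print(can_view("doc:roadmap", "user:alice"))  # True (direct owner)
print(can_view("doc:roadmap", "user:bob"))    # True (viewer via group:eng)
print(can_view("doc:roadmap", "user:carol"))  # False
```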


Redefining Software Resilience: The Era of Artificial Immune Systems

Artificial Immune Systems (AIS), inspired by the vertebrate immune system, provide an innovative approach to designing self-healing software. By emulating the biological immune system’s ability to adapt, learn, and remember, AIS can empower software systems to detect, diagnose, and fix issues autonomously. AIS offers a framework that enables the software to learn from each interaction, adapt to system changes, and remember past faults and their resolutions, leading to a more robust, resilient system capable of tackling an array of unpredictable errors and vulnerabilities. The vertebrate immune system consists of innate immunity and adaptive immunity. Innate immunity protects against known pathogens and is non-specific and general; present self-healing software models closely resemble it. Adaptive immunity can learn from current threats and apply that knowledge to handle future situations. At their core, AIS mimic the vertebrate immune system’s differentiation of self and non-self entities.
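
One classic AIS technique for distinguishing self from non-self is negative selection: generate candidate detectors and keep only those that do not match normal ("self") behavior. A toy sketch, with invented feature ranges and matching radius:

```python
# Toy sketch of the classic negative-selection algorithm from AIS research:
# keep random detectors that do NOT match "self" (normal) samples, then flag
# anything a surviving detector matches as non-self (anomalous).
# Feature space and matching radius are illustrative.
import random

random.seed(42)

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train_detectors(self_samples, n_detectors=200, radius=0.15):
    detectors = []
    while len(detectors) < n_detectors:
        candidate = (random.random(), random.random())
        # Discard candidates that match normal behavior.
        if all(distance(candidate, s) > radius for s in self_samples):
            detectors.append(candidate)
    return detectors

def is_anomalous(sample, detectors, radius=0.15):
    return any(distance(sample, d) <= radius for d in detectors)

# "Self" = normal operating points clustered near (0.3, 0.3).
normal = [(0.3 + random.uniform(-0.05, 0.05), 0.3 + random.uniform(-0.05, 0.05))
          for _ in range(100)]
detectors = train_detectors(normal)
print(is_anomalous((0.31, 0.29), detectors))  # likely False: looks like self
print(is_anomalous((0.9, 0.9), detectors))    # likely True: far from self
```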


Europe’s Business Software Startups Prove Resilient: Why?

So what are the factors underpinning the resilience of Europe’s business software sector? One key element of the picture is demand from other tech companies. “Europe’s tech ecosystem is maturing,” says Windsor. “And as the sector matures, companies need tools. Those tools are being supplied by business software companies.” And of course, there is demand from companies outside the tech sector. From banking and financial services to manufacturing, digital transformation is continuing across the economy as a whole, creating opportunities for new B2B software providers. But how do European companies take advantage of those opportunities in a market that has been dominated by North American rivals? This isn’t captured in the data, but Windsor sees a home-market-first approach, widening out to include new countries and territories as businesses grow. “Anecdotally, companies start by selling to their domestic market, then they look at the continent. After that, they expand to other regions.” There is, Windsor adds, a preference for the Asia-Pacific region. The U.S., on the other hand, remains a difficult market.


Open RAN Testing Expands in the US Amid 5G Slowdown

To be clear, open RAN technology in the US has a number of backers. Dish Network is perhaps the most vocal, having built an open RAN-based 5G network across 70% of the US population. Further, other operators have hinted at their own initial open RAN aspirations, including AT&T and Verizon. Interestingly, the US government has also emerged as a leading proponent for open RAN. For example, the US military continues to fund open RAN tests and deployments. And the Biden administration's NTIA is doling out millions of dollars in the pursuit of open RAN momentum. Broadly, US officials hope to use open RAN technologies to encourage the production of 5G equipment domestically and among US allies, as a lever against China. But open RAN continues to face struggles. For example, US-based open RAN vendors like Airspan and Parallel Wireless have hit hurdles recently. And research and consulting firm Dell'Oro recently reported that open RAN revenue growth slowed to the 10 to 20% range in the first quarter, after more than doubling in 2022.


Low-Code and AI: Friends or Foes?

Although some predict that AI will replace low-code, there are actually many opportunities for symbiosis between the two. Rather than eradicate low-code platforms entirely, LLMs will likely become more embedded within them. We’ve already seen this occur as low-code providers like Mendix and OutSystems integrated ChatGPT connectors. Microsoft has also embedded ChatGPT into its Power Platform and integrated GPT-driven Copilots into various developer environments. “Low-code and AI on their own are powerful tools to increase enterprise efficiency and productivity,” said Dinesh Varadharajan, the chief product officer at Kissflow. “But there is potential for the combination of both to unlock game-changing automation for almost every industry. The power comes from the congruence between low-code/no-code and AI.” There is also the opportunity to train bespoke LLMs on the inner workings of specific software development platforms, which could generate fully built templates from natural language prompts.

Cloud cost optimization should begin by measuring the drivers of cloud spend at a granular level and then providing full visibility to the teams and organizations that are behind the spend, says Tim Potter, principal, technology strategy and cloud engineering with Deloitte Consulting. “Near-real-time dashboards showing cloud resource utilization, routine reports of cloud consumption, and predictive spend reports will provide application teams and business units with the data needed to take action to optimize cloud costs,” he notes. ... Rearchitecting applications is a frequently overlooked way to achieve the cost and other benefits of transitioning to a cloud model. “Organizations also need to understand the various discount models and select one that optimizes costs yet also provides flexibility and predictability into spending,” says Mindy Cancila, vice president of corporate strategy for Dell Technologies. Cancila adds that organizations should not only consider current workload costs, but also how to manage costs for workloads as they scale over time.


Warning: Attackers Abusing Legitimate Internet Services

Cloud storage platforms, and Google Cloud in particular, are the most exploited, followed by messaging services - most often Telegram, including via its API - as well as email services and social media, the researchers found. Examples of other services being abused by attackers include OneDrive, Discord, Gmail SMTP, Mastodon profiles, GitHub, bitcoin blockchain data, the project management tool Notion, the malware analysis site VirusTotal, YouTube comments and even profiles on the movie review site Rotten Tomatoes. "It is important to note that ransomware campaigns use legitimate cloud storage tools such as mega.io or MegaSync for exfiltration purposes as well," although the crypto-locking malware itself may not be coded to work directly with legitimate tools, the report says. Criminals' choice of service depends on the desired functionality. Anyone using an info stealer such as Vidar needs a place to store large amounts of exfiltrated data. The researchers said cloud services' easy setup for less technically sophisticated users makes them a natural fit for such use cases.



Quote for the day:

"We're all passionate about something, the secret is to figure out what it is, then pursue it with all our hearts" -- Gordon Tredgold

Daily Tech Digest - August 19, 2023

Inside the Rise of 'Dark' AI Tools - Scary, But Effective?

This shouldn't be surprising, since building LLMs is an intensive endeavor. "As what WormGPT showed, even with a dedicated team of people, it would take months to develop just one customized language model," Sancho and Ciancaglini said in the report. Once a product launched, service providers would need to fund not just ongoing refinements but also the cloud computing power required to support users' queries. Another challenge for would-be malicious chatbot developers is that widely available legitimate tools can already be put to illicit use. Underground forums abound with posts from users detailing fresh "jailbreaks" for the likes of ChatGPT, designed to evade providers' restrictions, which are intended to prevent the tool from responding to queries about unethical or illegal topics. In his WormGPT signoff earlier this month, Last made the same point, noting that his service was "nothing more than an unrestricted ChatGPT," and that "anyone on the internet can employ a well-known jailbreak technique and achieve the same, if not better, results by using jailbroken versions of ChatGPT."


4 ways simulation training alleviates team burnout

Simulation training boosts confidence because, unlike traditional training methods, the learner gains experience over time through true-to-life virtual cyber warfare training and sparring against simulated malicious adversaries that behave like human opponents. By training in the same IT infrastructure they have at their job, complete with networks, servers, and security tools, they improve competencies and judgment skills and gain “muscle memory” so they feel prepared to respond to a real cyber incident. ... With simulation training, SOC teams learn to identify false positives and high-priority alerts more effectively over time as they become familiar with the types of alerts that end up impacting their organization’s infrastructure. The training can mimic the high volume of alerts they receive during the day and help teams develop effective triage strategies to streamline their response processes. Practicing this in simulation allows teams to experiment with their approach and fine-tune it without fear of making a mistake during operating hours.


A managerial mantra in the age of artificial intelligence

The rise of modern management brought forth professionalism through business schools, advocating ethical standards and fostering professional workplaces globally. Often, this professionalism is rooted in the mastery of managerial principles. These principles are created and taught by a variety of business school professors, and they are developed in close collaboration with executives and leaders. Unfortunately, a lot of these ideas have been applied only sparingly due to practical limitations. These limitations may result from the limited time available for decision-making in the corporate world, the need to manage uncertainties, the lack of data and accurate knowledge of the facts, and occasionally even the ignorance of professional principles. ... Organisational thinkers have traditionally identified that this leads to satisficing, whereby managers have to be satisfied with the good-enough, not necessarily the best, choice. In other words, constraints on time availability lead a manager to do a limited analysis of the impact of a job candidate on future organisational performance.


Five Challenges in Implementing AI in Automation

Accuracy and bias are two critical, yet recurring issues in AI that require human supervision. For example, generative AI applications are prone to hallucination, or making up facts based on their training dataset. In the same vein, biased datasets fed into a machine learning model can produce biased results. If a financial services firm is using an AI-driven automated system to accept or reject credit applications, for example, it’s essential to avoid well-documented, systemic biases toward women or people of color that may be contained in the training dataset. As we progress toward AI-driven decision-making, it’s critical for humans to remain in the loop, verifying the results generated by machine learning algorithms to check bias and other forms of inaccuracy. Keeping humans in the loop is a critical step toward re-training algorithms to perform more effectively in a production environment. ... Regulating AI is an ongoing issue globally, and the legal field continues to be shaped by emerging technologies including generative AI. 
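
As a toy example of the kind of check a human-in-the-loop process might run on a credit-approval model, the sketch below compares approval rates across groups (demographic parity); the data and tolerance are made up:

```python
# Illustrative sketch of one simple fairness check for an automated
# credit-approval model: compare approval rates across groups
# (demographic parity). Data and tolerance are invented.
from collections import defaultdict

decisions = [  # (group, approved) pairs from the model's decision log
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, approved = defaultdict(int), defaultdict(int)
for group, ok in decisions:
    totals[group] += 1
    approved[group] += ok

rates = {g: approved[g] / totals[g] for g in totals}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# Flag for human review if approval rates diverge beyond a chosen tolerance.
if max(rates.values()) - min(rates.values()) > 0.2:
    print("Disparity exceeds tolerance: route model decisions to human review.")
```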


Mastering Agile Security: Safeguarding Your Projects in a Fast-Paced World

Just ensuring rapid delivery of the product is not enough. The key to Agile success is to ensure that security is an integral part of the process from the beginning. And since agile is an iterative process that is all about accommodating changing requirements as they arise, security must also be part of this iterative process. Regular security reviews and tests whenever there is a change in the product are the key to delivering a working as well as secure product. ... Agile security is not an impediment to the Agile process; rather, it's an essential component that ensures the final product is robust, resilient, and safeguarded against potential threats. It's not about slowing down development but about integrating security seamlessly into every phase of the project lifecycle. ... At the core of Agile security is the Agile mindset. This mindset emphasizes collaboration, adaptability, and constant improvement. Security is not a one-time event but an ongoing effort that requires the entire team's commitment.


Managing Software Development Team Dynamics from Within

In most cases, the whole team will benefit from trying new tools or services every now and then, just to understand patterns and trends. We know we should always be increasing automation. However, especially with things like JavaScript frameworks, up jumps the New Pusher, too keen to adopt the new when no evidence exists that the gains are worth the disruption cost. Or worse, ignoring the disruption cost entirely. The New Pusher can make the team pine for the road not taken, instead of doing what they should do: investigate a little on their own time to see how the team will truly benefit from their shiny find. When thinking about adopting a new tool or service, the team should not trial it somewhere inconsequential, as that will be neither conclusive nor beneficial. A short examination or study period should lead to a yes/no decision and the use of the tool or platform somewhere of value. Once the pattern is set, the New Pusher can work to that template. The suspicion that people just want to put new experiences on their CV is a little irrelevant.


How Generative AI Is Making Data Catalogs Smarter

Sequeda explained how generative AI, which leverages conversational, chat-oriented interfaces to surface results from large language models (LLMs), improves productivity and encourages the adoption of a data catalog. With more traditional data catalogs, administrative tasks require more significant manual interventions, time, and some advanced skills and analysis. Smart catalogs remove these barriers by simplifying and automating some of the administrative workflows. As a result, team members in an organization see faster time to value and find it easier to get started with the catalogs. On the data producers’ end, Sequeda said, “Generative AI automatically enriches metadata around the inputs and provides descriptions and synonyms” in the data catalog, smoothing catalog record creation and upkeep. Also, smart data catalogs give data engineers “code summaries” about catalog queries, reducing the time to do DataOps, including any pipeline malfunctions. Using smart data catalogs, consumers find inspiration when the generative AI suggests alternative queries from previous searches and patterns of results. 
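
A hypothetical sketch of such enrichment on the data producers' end; generate() stands in for whatever LLM the catalog integrates and is not a real library call:

```python
# Hypothetical sketch of generative metadata enrichment in a data catalog.
# `generate()` stands in for whatever LLM call the catalog integrates;
# it is not a real library function.
def generate(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM provider here")

def enrich_column_metadata(table: str, column: str, sample_values: list) -> str:
    # Build a prompt from catalog context so the model can propose
    # a business description and synonyms for the column.
    prompt = (
        f"Table '{table}' has a column '{column}' with sample values "
        f"{sample_values[:5]}. Write a one-sentence business description "
        f"and three synonyms for this column."
    )
    return generate(prompt)

# Usage (illustrative):
# description = enrich_column_metadata("orders", "cust_seg", ["SMB", "ENT"])
```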


Four Myths About Digital Transformation And How To Debunk Them By Modernizing At The Data Layer

A data fabric architecture is essentially a data mesh with an added “abstraction layer” that virtualizes all data into a centralized platform. The benefit is a single pane of glass for all data, virtualized and contextualized for a broader range of business users to work with. The trade-off is that this sudden visibility can be daunting for DX teams newly tasked with untangling all the previously unseen dependencies, vulnerabilities, governance issues, and compliance or security gaps that suddenly appear. All three approaches remain represented in today’s marketplace for organizations to choose from. And while the calculus for making the choice will vary for each company based on their DX goals and level of technical expertise, a common ingredient to success is to prioritize scalable and repeatable processes through automation and low-code wherever possible. ... Choosing the right underlying data architecture is an ongoing balance of matching the pros and cons of the approach to the specific business and operational needs of the organizations. 


A license to trust: Can you rely on 'open source' companies?

Amanda Brock, CEO of OpenUK, which doesn't have a horse in the IaaS race, appeared disappointed with the company's move. "HashiCorp has always been a true open source company, and what Mitchell Hashimoto and Armon Dadgar achieved from a project never intended to be commercialized has been incredible." Brock then asks, "Taking it to an IPO and seeing Mitchell have the apparent wisdom to step aside and allow a more experienced individual to run HashiCorp – but has that also led to its downfall as an open source company?" Her answer is yes. "The statements about BSL are sadly open-washing. It would be wrong to suggest these two ever intended a bait and switch, but they have indeed switched away from open source. The pressure of enabling their competitors with their innovations – an inevitability of open source – did not align with the need to generate shareholder value." That led her to another, bigger question: "How much money is enough? Is a lot of money, with others generating a lot of money too, a reason to stop?" She's left "wondering whether, had Mitchell remained CEO, this would have occurred."


Culture Transformation: What leaders need to know

Fortunately, culture only appears enigmatic: There are practical, tangible, measurable ways leaders can properly manage their culture. And it all starts with alignment. Executives need to be on the same page with their leadership teams -- particularly CHROs -- about where their culture stands today and where it’s headed in the future. You might be thinking: “We’re already aligned about our culture.” But it’s not enough to be generally on the same page. The best leaders are synchronised on specific, seemingly small details about their culture and how they affect performance. In one of our client organisations, the goal of being a high-performance culture is behind all decisions. Every leadership meeting keeps high-performance front and centre in their conversation. For instance, leaders might be on the same page about the core values and beliefs -- such as customer-centricity or excellence in safety outcomes -- that they want their culture to embody. But the best path to excellence varies tremendously by industry, market segment, product and more. 



Quote for the day:

"Success is not a random act. It arises out of a predictable and powerful set of circumstances and opportunities." -- Malcolm Gladwell

Daily Tech Digest - August 18, 2023

Though simpler, India’s data privacy law is stricter than GDPR in some ways

If you think this is all a tough ask, you should know that the law is simpler and less prescriptive than data privacy laws in many countries. This kind of simpler law is appropriate for a country like India for two reasons: one, because India is just starting down the road of data privacy compliance, and two, because India has a huge SME sector that would struggle to comply with a more complex law. At the same time, the law is stricter than GDPR in some ways; for example, in the EU, a business that can develop a case for having a “legitimate interest” to process personal data can do so without consent. This is largely not possible in India. Further, in the EU, a data breach needs to be reported only to the regulator, and to individuals only where the data fiduciary concludes that the breach could result in a risk to the rights and freedoms of the individual. The government has given itself the power to exempt classes of data fiduciaries from provisions of the law. This includes start-ups, which have been specifically mentioned.


Exploring Differences Between Diversity and Inclusion

At an organizational level, both diversity and equity can be addressed through recruitment processes, but inclusivity is the most challenging and up to the company as a whole, including all employees. One of the ways to encourage employees to adopt inclusive behavior is through the power of education. When people understand why change is important, they are often more inclined to respond. The word “inclusive” is not new; however, it is sometimes used with little substance. Workplaces say they are inclusive because they have a diverse representation of employees, but when you ask the minority groups in that organization if they feel heard, the answer is often conflicting. Rather than playing the game, they feel as though they are mascots or warming the bench. When employees realize inclusion means making sure minority groups feel like they belong, it allows them to assess and challenge their own personal bias, which may be preventing them from fully embracing all perspectives.


Breached for years: How long-term cyber attacks are able to linger

The first step for any cyber criminal looking to pull off a years-long hack is to find a way into a target’s network. Even when organizations make it difficult, there’s usually one entry point. Whether by using initial access brokers (IABs), exploiting vulnerabilities, or using employee credentials – the most effective of the three – they need to get in without tripping any alarms. During the early days of a breach, hackers will do very little other than observe a business and how its people work. They’ll learn all the different processes that staff execute during a typical workday and use that knowledge to mask their movements around the network. There will be no intrusive actions (data exfiltration, vulnerability exploits, lateral movements) until they know how to blend in with the everyday traffic being triaged by the organization’s security operations center (SOC) analysts. Attackers usually indulge in one of two methods to remain undetected for extended periods of time. The first is to use genuine compromised credentials and mimic that employee’s usual behavior.


Tech leaders weigh in on the upside and flipside of generative AI

So if projects are already getting off the ground, what are the feelings about where generative AI works best, and how? “The best practices are undoubtedly cross-functional collaboration, ‘try before you buy,’ and learn from what you do,” says Marc O’Brien, CIO at radiology healthcare service provider Medica Group. “In my experience, the algorithms from reputable firms do what they say on the tin but what really matters is where you position in the workflow.” Team Teach’s Ivell believes companies can gain a fast start by using tools being built into applications and suites. “One of the key and immediate opportunities of generative AI is it’s already being built into some tools we already use, be that Power BI, Adobe or more industry-specific apps,” he says. “To take advantage of these needs some internal discovery or analysis of these new functions, understanding how we’d use them, and, in the first instance, training our staff how to exploit the new features. People tend to use tools in the way they always have, and adoption of new features can be slow, so we need to accelerate that.”


6 best practices to defend against corporate account takeover attacks

It’s important to have strong multifactor authentication around all corporate accounts, says Bryan Willett, CISO at Lexmark. "What we’re finding with some of the latest phishing services that are out there, such as EvilProxy, is that they’re getting very good at imitating a login screen that looks just like your corporate login screen and your corporate MFA challenge," Willett says. "And the user has the potential of falling victim to that and sharing their MFA." ... Organizations should also implement contextual access management that considers a user’s current location, the device being used, time of access, network environment, behavior patterns, and other contextual information, according to Halstead. "By doing so, the risk of unauthorized access, often exploited in corporate account takeovers, can be significantly minimized," he says. ... Employee education and awareness are critical, says Halstead. This "human firewall" remains a very important defense in preventing corporate account takeovers.


Make Data Security Training Fun and Engaging with These Tips

What are your employees most interested in? What’s most likely to capture their attention? If you don’t know, ask. Gather insights from employees to identify their current concerns and interests and integrate those into the content. Consider how you could leverage their personal interests in your storytelling approach. For instance, if you have a large base of avid football fans, how might a Super Bowl-themed story or challenge related to data security help capture their interest? Ensure accuracy while entertaining: learning outcomes need to take center stage in your communication efforts, of course. Strive to provide accurate information about cybersecurity and employees’ roles in helping to protect systems and data, while integrating some fun into the delivery of the content. ... Good stories have a protagonist (in this case, the employees), an antagonist (cybercriminals), and some tension that leads to a climax in the plotline. Use these elements to create content that entertains while also illustrating the tangible outcomes and repercussions of poor data security practices, like the potential damage to personal and professional relationships.


Robotic Process Automation: Is Your Job at Risk?

One thing is certain: Change is inevitable. Hairston points to The World Economic Forum’s The Future of Jobs Report 2023, which estimates that 44% of workers’ skills will be disrupted over the next five years. The current pace of technology evolution is transforming jobs faster than ever. IT can either be a key facilitator of the change or a recipient of the change. “In the former case, IT can push the business toward RPA and other automation technologies that are designed to be used by business,” he says. “This will help companies achieve their most strategic objectives and view IT as more of a partner.” As RPA and AI gain stronger footholds, the only way forward is to help displaced team members reskill and upskill, Zhao says. “Fortunately, many online training platforms are available at affordable costs.” To expedite learning, he advises organizations to develop curated content that employees can freely access. Zhao notes that such content should be relevant to both the work being phased out as well as to the tasks that team members will need moving forward. Executive-level sponsorship of any intelligent automation strategy is essential for long-term success.

We think that as general-purpose robots are becoming more common—and they are—people could misuse them. You can find videos online showing how easy it is to attach a weapon to a mobile robot. So, there’s a reasonable concern about who will have access to robots and what they can do with that access. We want to make sure that there will be some regulation around this—and lead the charge in getting it put into place. Policymakers need to get engaged and be informed about the capabilities of the robots, as well as the potential dangers. We are being vocal about our anti-weaponization stance: robots should not cause harm, nor should they impinge upon anybody’s privacy. The industry that we’re hoping to build only exists if people trust robots. If they’re afraid of them, then that’s going to be a problem. ... By managing the final assembly ourselves, we have better control over the quality and cost, and it helps us to rapidly iterate. One of the things we have learned is that when you iterate your design and work with a partner to do the assembly, the communication challenges are pretty thick. 


The Architect’s Guide to Thinking about Hybrid/Multicloud

While most people will tell you complexity is the hardest thing to manage in a multicloud environment, the truth is that consistency is the primary challenge. Having software that can run across clouds (public, private, edge) provides the consistency to manage complexity. Take object storage. If you have a single object store that can run on AWS, GCP, Azure, IBM, Equinix or your private cloud, your architecture becomes materially simpler. Consistent storage and its features (replication, encryption, etc.) enable the enterprise to focus on the application layer. Consistency creates optionality, and optionality creates leverage. Reducing complexity can’t come at some unsustainable cost. By selecting software that runs across clouds (public, private, edge) you reduce complexity and you increase optionality. If it’s cheaper to run that workload on GCP, move it there. If it’s cheaper to run that database on AWS, run it there. If it’s cheaper to store your data on premises and use external tables, do that.
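
As a sketch of why a consistent S3-compatible API simplifies the architecture, the same client codepath can target any cloud by swapping the endpoint; the endpoints and bucket names below are invented, and credentials are assumed to come from the environment:

```python
# Sketch: one S3-compatible client codepath across clouds. Endpoints and
# bucket names are illustrative; credentials come from the environment.
import boto3

ENDPOINTS = {
    "aws": None,  # None = default AWS endpoint
    "on_prem_minio": "https://minio.internal.example.com",
    "other_s3_compatible": "https://s3.provider.example.com",
}

def put_object(target: str, bucket: str, key: str, body: bytes):
    # The application layer stays identical wherever the data lives.
    client = boto3.client("s3", endpoint_url=ENDPOINTS[target])
    client.put_object(Bucket=bucket, Key=key, Body=body)

put_object("on_prem_minio", "analytics", "events/2023-08-20.json", b"{}")
```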


Observability – everything you need to know

The first thing you need to do on any of these platforms is to get your data into it. Historically, for log analytics solutions, like Splunk, that was relatively easy. I don’t mean to trivialise this, but you would grab logs from all of your infrastructure and send those back to Splunk and we process those. You would usually deploy an agent to do it. For observability solutions – not just ours, but any of them – you need more data. In addition to those logs that you capture from each host, you also need system metrics and application metrics and profiles and distributed traces and everything else. There are additional layers of complexity here. Now you’re not just capturing human readable logs from operating systems, you’re capturing all these other types of data from the individual applications that people have written. That requires hooks into all of the hundreds of thousands of libraries that software developers use. I think that historically has held back this industry to a fairly large degree. We rely on a project that I co-founded with a number of other people, and a number of other companies.
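
The speaker does not name the project in this excerpt; assuming an OpenTelemetry-style API for illustration, capturing a trace span and an application metric alongside logs looks roughly like this (without an SDK configured, these calls are no-ops):

```python
# Illustration assuming an OpenTelemetry-style API: capture a trace span and
# an application metric alongside the logs an agent would already collect.
from opentelemetry import trace, metrics

tracer = trace.get_tracer("checkout-service")
meter = metrics.get_meter("checkout-service")
orders_counter = meter.create_counter("orders_processed")

def handle_order(order_id: str):
    # A span records timing and context for this unit of work.
    with tracer.start_as_current_span("handle_order") as span:
        span.set_attribute("order.id", order_id)
        orders_counter.add(1, {"status": "ok"})  # application metric
        # ... business logic; logs emitted here can be correlated to the span
```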



Quote for the day:

"I believe it is important for people to create a healthy mental environment in which to accomplish daily tasks." -- Darren L. Johnson

Daily Tech Digest - August 17, 2023

What would an OT cyberattack really cost your organization?

Attacks on industrial control systems (ICS) may not be just about ransomware or accessing information but about deliberately making machines misbehave. Attackers can exploit vulnerabilities to make machines overheat, or robotic arms swing unpredictably. A failed attack on a water utility in Florida attempted to raise the amount of lye in the drinking water; success might have killed thousands. ... When operations in your factory, plant, or substation shut down, revenue will cease. So an important question, not just for the CISO but also for Operations, Finance, and other chiefs, is how long you can go without the expected revenue, which you may never see. ... There will be significant damage to an organization's public reputation as news of an attack gets out. The customer trust that took years to build may be gone in an instant, and customers forced to find another supplier while you're shut down may not come back. After all, your shutdown not only inflicted damage on companies further down the chain, it may also have created an impression that you were careless in letting it happen.


The Risk of Quantifying Cyberrisk

Legal concerns could stem from the nature of risk quantification. This process is designed to uncover problems with an actionable amount of detail. Anything that is discoverable in a legal proceeding can find its way into a court case and embarrassing fallout may ensue. The fear is that the very detailed CRQ risk assessment results will be made public. For many organizations that have not adopted CRQ, such results may include lists of broken or missing controls and audit results, all with corresponding verbal risk labels (e.g., high, medium, low). They could (and really should) also include a list of scenarios with the same risk labels attached to them. These results alone could be damning to some organizations. Specific CRQ concerns stem from having all of these elements tied to a potential amount of loss and frequency. However, it is difficult to imagine a court proceeding where strictly qualitative results would allow an organization to walk free.


The CISO Report – The Culture Club

The report highlighted a number of key challenges facing organizations in the EMEA, which are clearly now being discussed in the C-Suite. These challenges include the level of regulatory compliance that organisations now face, especially those operating in these regions. In my opinion, the General Data Protection Regulation (GDPR) is still a massively misunderstood piece of legislation that organisations need help with, yet, the C-Suite recognises the importance of it. Added to this is the ongoing threat of cybercrime, as organisations large and small are facing an increasing number of cyberattacks, including ransomware attacks, data breaches, and Distributed Denial of Service (DDoS) attacks. ... To embed cybersecurity and data protection within an organisation, you do not look to build a security culture, but rather, you look to build a culture that respects the importance of Security. This is a simple, yet profound distinction. Every organization possesses a culture, which might either emerge naturally or be intentionally and meticulously developed. Regardless of its origins, the influence of this culture on an organization remains undeniable.


AI for Data Management: An Old Idea with New Potential

No matter how you choose to leverage AI in the data management space — whether you're using AI for more basic needs or you're taking advantage of next-generation AI technologies — your goal should be to identify ways that AI can accelerate workflows and reduce toil for data engineers. Much of the work that data engineers perform on a daily basis can be tedious and time-consuming. Converting data from one format to another by hand could take enormous amounts of time and is a boring task, to put it mildly. So is sifting through vast volumes of information to find data quality issues like redundant or empty cells. Even if you leverage tools to help search and sort data automatically, you're still likely to find yourself investing an inordinate amount of time on data quality if you have to write complex queries by hand to detect quality problems. But if you can substitute AI-based workflows for these tasks, you save yourself a lot of time and labor. 
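
As a sketch of how small the routine checks themselves can be (the toil the article says AI can then absorb at scale), a few lines of pandas can surface empty cells and redundant rows; the data is illustrative:

```python
# Sketch of routine data-quality checks in pandas: surface empty cells and
# redundant rows without hand-written queries. Data is illustrative.
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4, None],
    "email": ["a@x.com", None, "b@x.com", "b@x.com", "c@x.com"],
})

report = {
    "empty_cells_per_column": df.isna().sum().to_dict(),
    "duplicate_rows": int(df.duplicated().sum()),
    "duplicate_ids": int(df["customer_id"].duplicated(keep=False).sum()),
}
print(report)
```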


Low-code and no-code: Meant for citizen developers, but embraced by IT

Low-code and no-code tools continue to gain popularity because organizations "are realizing that these tools are not just for early-stage or beginner citizen developers but also for sophisticated, senior developers to save them valuable time and effort," says Pulijala. "Low-code/no-code helps, whether it's addressing talent shortages or freeing up other developers' time. With low-code/no-code solutions, a junior product manager can build a basic prototype, freeing up more senior engineers to focus on customized, higher code solutions. In addition to mitigating talent shortages, low-code/no-code tools improve business agility and contribute to cost savings since it significantly reduces hiring costs and application maintenance costs." ... "While no-code solutions are built from the point-of-view of a non-developer user, they will at times still require professional IT intervention. Enterprise applications can be complex and outages can happen, requiring IT to step in to triage and get things running again."


Multiple Flaws Uncovered in Data Center Systems

Data center equipment and infrastructure solutions provider CyberPower's PowerPanel Enterprise DCIM platform allows information technology teams to manage, configure and monitor the infrastructure within a data center through the cloud, serving as a single source of information and control for all devices. "These platforms are commonly used by companies managing on-premises server deployments to larger, co-located data centers - like those from major cloud providers AWS, Google Cloud and Microsoft Azure," the researchers said. Dataprobe manufactures power management products that assist businesses in monitoring and controlling their equipment. The iBoot-PDU allows administrators to remotely manage the power supply to their devices and equipment via a "simple and easy-to-use" web application, according to the researchers, who added that the devices are "typically found in small to midsized data centers and used by SMBs managing on-premises server deployments."


Hybrid mesh firewall platforms gain interest as management challenges intensify

"A hybrid mesh firewall makes you highly dependent on one single vendor," says John Carey, managing director of the technology solutions group at global consulting firm AArete. "Some organizations prefer to have best-of-breed and select the right tool for the right job. You'll see CrowdStrike running alongside CyberArk running alongside Juniper running alongside Cisco. You don't see many organizations doing a blanket removal, taking out all those tools and putting in one. It's costly, and they don't want to be totally dependent on that one vendor." With a hybrid mesh firewall only able to manage firewalls from that one vendor, that could be a problem for those companies. Alternatively, an enterprise can use an NSPM product from a vendor such as Tufin or Firemon, says Scott Wheeler, cloud practice leader at Asperitas Consulting, an IT and cloud services firm. "They are not firewall products, but they do enable the concept of hybrid mesh firewall. So, depending on how you look at the semantics, they are more of a hybrid mesh firewall solution because you can manage across different firewall providers."


Why the cyber skills crisis is an opportunity to transform your cybersecurity

A strategic approach is needed for security leaders and their teams to address the resource crisis. A key response emerging in the market is security vendor consolidation. According to Gartner, 75% of organizations were pursuing consolidation in 2022, almost tripling since 2020. Considering that an alarming 35% of cyber budgets are being spent on tools that don’t give a measurable improvement in cybersecurity posture, it’s evident why businesses are seeking to consolidate and do more with less. However, there is a degree of caution around consolidating vendors and tools. Nearly four in five security leaders and decision-makers admitted to being concerned that consolidation will reduce their ability to mitigate cyber risk. But we found this skepticism to be unfounded. In reality, half of those who have begun consolidating have seen an improvement in security posture as a result. This is because, when approached strategically, consolidation streamlines security operations. 


Industrial modernization: Becoming future-ready in uncertain times

Future-ready companies have already embraced agile practices and distributed computing technologies like edge computing, containers, and microservices to optimize existing systems and drive innovation. IT modernization is the practice of updating older software and infrastructure to newer computing approaches, including languages, frameworks, architectures, and infrastructure platforms. It does not require wholesale replacement; if done well, modernization can extend the lifespan of an organization’s software and infrastructure while taking advantage of recent innovation. While the term legacy may have a negative connotation in technology, these systems are often the bedrock of a company’s business operations. Modern, cloud-native computing paradigms are distributed by nature. Modernization shifts the technology stack from a tightly coupled, hierarchical, siloed, and point-to-point structure to one that is application-driven, loosely coupled, software-defined, and integrated across all layers of the architecture.


Interrogate Your Software with AI — The Future for SREs

With AI-driven incident analysis, we gain the capability to process data rapidly and recognize correlations that otherwise might have been overlooked. This empowers us to take proactive measures and predict potential incidents using historical data, breaking free from the limitations of reactive maintenance. Moreover, AI-powered analysis can play a vital role in assisting SREs in determining the severity of incidents. By defining criteria for incident severity classification and relying on AI insights, we can make more informed decisions and prioritize response efforts efficiently. Resource allocation, a crucial aspect of SRE, can be guided by AI-generated statistics that paint a clear picture of an incident’s impact and resource requirements, enabling us to scale responses based on severity and complexity. Finally, we can’t forget about incident reports, documentation and runbooks. We all know how bad those can be. Depending on who triaged the incident, what’s reported and documented can range from a simple paragraph to pages of in-depth research and analysis. 



Quote for the day:

"The problem with being a leader is that you're never sure if you're being followed or chased." -- Claire A. Murray

Daily Tech Digest - August 16, 2023

The looming battle over where generative AI systems will run

What is becoming more apparent is that the location where most generative AI systems will reside (public cloud platforms versus on-premises and edge-based platforms) is still being determined. Vellante’s article points out that AI systems are running neck-and-neck between on-premises and public cloud platforms. Driving this is the assumption that the public cloud comes with some risk, including IP leakage, or when better conclusions from your data appear at the competition. Also, enterprises still have a lot of data in traditional data centers or on edge computing rather than in the cloud. This can cause problems when the data is not easily moved to the cloud, with data silos being common within most enterprises today. AI systems need data to be of value, and thus it may make sense to host the AI systems closest to the data. I would argue that data should not exist in silos, and that keeping AI systems close to siloed data enables an existing problem. However, many enterprises may not have other, more pragmatic choices, given the cost of fixing such issues.


Quantum Computing: Australia’s Next Great Tech Challenge & Opportunity

One of the big opportunities for Australia in this space will be its close relationship with the United States. Because of the sheer value of quantum computing research and technology across both military and civilian IP, nations tend to be more circumspect about sharing information in comparison to conventional technology. The downside to this is that it means the U.S. isn’t able to draw on the same global pool of talent that it’s used to. A shortage of talent isn’t such a major issue in regular computing fields because global talent tends to pool and openly share information. ... “As other nations push forward, Australia risks missing out on the potential economic benefits,” a report by the University of Sydney notes. “We could also lose talented workers to countries that are investing more in quantum research. Projects like the ambitious attempt to build the world’s first complete quantum computer aim to provide local opportunities and funding alongside their top-line goals. Moreover, Australia has a responsibility to ensure quantum technologies are developed and used ethically, and their risks managed.”


Q&A: An Introduction to Streaming AI

Streaming AI is about continuously training ML models using real-time data, sometimes with human involvement. The incoming data streams from many sources are analyzed, combined with contextual information, and matched against features that carry condensed information and intelligence specific to the given problem. ML algorithms continually generate these features using the most current data available. On the other hand, as noted earlier, generative AI focuses on generating responses based on a “seed” and then a pattern for finding the next thing to tack on. This works to generate content that conforms to certain parameters the model has “learned.” It is bounded, but not in a way that the boundaries can be easily understood. Until the recent rise of LLMs, considerable effort was invested in making ML models explainable to humans. The question was: how does the model arrive at its result? The “I have no idea” response is hard for humans to accept. In the made-up legal case citations example, the LLM program generated a motion that argued a point, but when asked to explain or validate its path, it just made some stuff up.
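
A minimal sketch of the continuous-training idea using scikit-learn's partial_fit; a production streaming-AI system would sit behind a streaming platform, and the features and labels here are invented:

```python
# Sketch of continuous training with scikit-learn's partial_fit. A production
# streaming-AI system would consume micro-batches from a streaming platform;
# features and labels here are invented.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])  # must be declared up front for incremental fitting

rng = np.random.default_rng(0)
for batch in range(5):  # each iteration stands in for a fresh micro-batch
    X = rng.normal(size=(64, 3))             # engineered features
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in labels
    model.partial_fit(X, y, classes=classes)  # incremental model update

print(model.predict(rng.normal(size=(3, 3))))
```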


CISO’s role in cyber insurance

Enter cyber insurance, a safety net that offers organisations a way to mitigate the financial impact of these cyber incidents. However, navigating the complex landscape of cyber insurance is no small feat. This is where the Chief Information Security Officer (CISO) comes into play. As the vanguard of an organisation’s cybersecurity efforts, the CISO not only ensures that digital fortresses are robust but also plays a pivotal role in the realm of cyber insurance. Their expertise and insights are instrumental in assessing risks, selecting the right coverage, and ensuring that the organisation gets the most out of its policy. In essence, the CISO bridges the gap between the technical world of cybersecurity and the financial realm of insurance, ensuring that businesses are both well protected and well insured. ... As the primary custodian of an organisation’s cybersecurity posture, the CISO is responsible for conducting a thorough risk assessment. This involves identifying potential vulnerabilities, assessing the potential impact of different types of cyber incidents, and estimating the financial costs associated with these incidents.


Bolstering Africa’s Cybersecurity

In recent weeks and months, we have seen opportunities arise, often provided by academia and government, to improve cyber education. However, some parts of Africa are still without decent levels of electricity. So, is the dream of cyber education for all unattainable? ... Despite this, Africa-based data security analysts point out that a dearth of qualified technicians coupled with a lack of investment in cybersecurity has been the direct contributor to a growth in the amount and scale of successful cyberattacks. In fact, according to research from IFC and Google, Africa’s e-economy is expected to reach $180 billion by 2025, but its lack of security support could halt that growth. Most of these campaigns are based upon spam or phishing efforts derived from information garnered from open source intelligence (OSINT), which is often more effective against a remote workforce that may be more exposed to attack techniques while outside of the technical and administrative controls of traditional office work.


Everything Can Change: The Co-Evolution of the CMO and the CISO

Organizations with an established partnership between the CISO and CMO tend to outperform their competitors. This collaboration allows for a cohesive approach to risk management and brand protection, resulting in increased customer trust and loyalty. Organizations that view the CISO purely as a technical operational leader often struggle with cybersecurity initiatives and fail to align security measures with business goals. This approach limits the potential for strategic contributions from the CISO in driving revenue growth and defending value. On the other hand, organizations that integrate the CISO into the go-to-market strategy leverage their expertise to address security concerns proactively, enhancing customer trust and differentiating themselves from competitors. By combining security practices with marketing efforts, these organizations can communicate their commitment to data protection and establish a competitive advantage in terms of trustworthiness. Effective CISOs have a seat at the executive table, allowing them to more directly align security initiatives with business outcomes. 


Machine unlearning: The critical art of teaching AI to forget

Machine unlearning is the process of erasing the influence specific datasets have had on an ML system. Most often, when a concern arises with a dataset, it’s a case of modifying or simply deleting the dataset. But in cases where the data has been used to train a model, things can get tricky. ML models are essentially black boxes. This means that it’s difficult to understand exactly how specific datasets impacted the model during training and even more difficult to undo the effects of a problematic dataset. OpenAI, the creators of ChatGPT, have repeatedly come under fire regarding the data used to train their models. A number of generative AI art tools are also facing legal battles regarding their training data. Privacy concerns have also been raised after membership inference attacks have shown that it’s possible to infer whether specific data was used to train a model. This means that the models can potentially reveal information about the individuals whose data was used to train it.
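
The trivially exact form of unlearning is retraining from scratch without the deleted records, which is what research techniques (such as SISA sharding) try to approximate far more cheaply. A sketch with invented data:

```python
# The trivially exact form of unlearning: retrain without the deleted records.
# Research techniques (e.g., SISA sharding) aim to approximate this far more
# cheaply. Data is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = (X[:, 0] > 0).astype(int)

model = LogisticRegression().fit(X, y)

# A deletion request arrives for these training rows:
forget_idx = np.array([3, 42, 99])
keep = np.setdiff1d(np.arange(len(X)), forget_idx)

# Exact unlearning: retrain from scratch on the remaining data only.
unlearned = LogisticRegression().fit(X[keep], y[keep])
```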


Unit Tests Are Overrated: Rethinking Testing Strategies

Unit tests fare much more poorly with this metric than most people realize. The first problem is that they often don’t provide useful information about the actual state of the system under review. When unit tests are written as acceptance tests, they are often intricately coupled with the specific implementation. They will only fail if the implementation changes, not when changes break the system (e.g., verifying the value of a class constant). Using acceptance tests as regression tests must be done intentionally and thoughtfully, deleting everything that does not provide useful information about the system’s behavior. Another major problem with unit tests is that to test the inputs of one method, you often need to mock out the responses from other methods. When you do this, you are no longer testing the system you have; you are testing a system that you assumed you had in the past. The system can break and a unit test will not fail, because it assumed an input that the real-world system no longer supplies.
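
A toy illustration of both failure modes: a test coupled to an implementation constant, and an over-mocked test that keeps passing against an input the real dependency no longer produces:

```python
# Toy illustration of the two failure modes described above, in pytest style.

TAX_RATE = 0.2  # implementation detail

def total_with_tax(fetch_price, sku):
    return fetch_price(sku) * (1 + TAX_RATE)

def test_tax_rate_constant():
    # Brittle: only fails if the constant changes; says nothing about behavior.
    assert TAX_RATE == 0.2

def test_total_with_mocked_price():
    # Over-mocked: assumes fetch_price returns a bare float. If the real
    # pricing service now returns {"amount": 10.0, "currency": "USD"},
    # this test still passes while production breaks.
    def fake_fetch_price(sku):
        return 10.0
    assert total_with_tax(fake_fetch_price, "sku-1") == 12.0
```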


The vital role the CISO has to play in the boardroom

Cybersecurity risk management and information governance are complex and gritty subjects which can be hard to follow for the uninitiated. Boardrooms aren’t the place for the ins and outs of the issue at hand. Learning to communicate effectively is possibly the single most important skill for aspiring and ambitious CISOs. Throughout history, great leaders have demonstrated an excellent ability to communicate, bringing people on a journey with them and gathering support along the way. This is not about dumbing down or glossing over the important parts. Rather, it’s about honing a fundamental business skill: being able to make a compelling argument clearly and concisely. You need to be able to translate critical cybersecurity information into business objectives. Cybersecurity risk management is a regulated requirement. Board directors, officers and senior management can be held liable for the decisions they make around cybersecurity risks and incidents. Clear and effective communication is critical in supporting organisations to make the right decisions that could be later relied upon to protect its people.


3 strategies that can help stop ransomware before it becomes a crisis

Without an incident response plan in place, companies typically panic, not knowing who to call, or what to do, which can make paying the ransom seem like the easiest way out. With a plan in place, however, people know what to do and will ideally have practised the plan ahead of time to ensure disaster recovery measures work the way they're supposed to. ... Having multiple layers of defense, as well as setting up multifactor authentication and data encryption, are fundamental to cybersecurity, but many companies still get them wrong. Stone recently worked with an educational organization that had invested heavily in cybersecurity. When they were hit by ransomware, they were able to shift operations to an offline backup. Then the attackers escalated their demands -- if the organization didn’t pay the ransom, their data would be leaked online. “The organization was well prepared for an encryption event, but not prepared for the second ransom,” Stone says. “There was actual sensitive data that would trigger a number of regulatory compliance actions.”



Quote for the day:

"Added pressure and responsibility should not change one's leadership style, it should merely expose that which already exists." -- Mark W. Boyer