Daily Tech Digest - March 11, 2024

Generative AI is even more of a mixed bag when it comes to writing secure code. Many hope that, by ingesting best coding practices from public code repositories — possibly augmented by a company’s own policies and frameworks — the code AI generates will be more secure from the very start and avoid the common mistakes that human developers make. ... Generative AI has the potential to help DevSecOps teams find vulnerabilities and security issues that traditional testing tools miss, to explain the problems, and to suggest fixes. It can also help with generating test cases. Some security flaws are still too nuanced for these tools to catch, says Carnegie Mellon’s Moseley. “For those challenging things, you’ll still need people to look for them, you’ll need experts to find them.” However, generative AI can pick up standard errors. ... A bigger question for enterprises will be how much of the generative AI functionality to automate — and how much to keep humans in the loop. If the AI is used to detect code vulnerabilities early in the process, for example, “To what extent do I allow code to be automatically corrected by the tool?” Taglienti asks.


White House Advisory Team Backs Cybersecurity Tax Incentives

Technology trade groups and cybersecurity experts have long called for financial incentives to help drive the implementation of new cybersecurity standards, but proposals differ on how to best encourage industries to prioritize cybersecurity investments. A white paper published in 2011 by the U.S. Chamber of Commerce, the Center for Democracy and Technology and other industry groups urged the federal government to focus on cybersecurity incentives over mandates, warning that "a more government-centric set of mandates would be counterproductive to both our economic and national security." In April 2023, the Federal Energy Regulatory Commission approved a rule allowing utility companies to include cybersecurity spending in their rate-setting calculations. FERC acting Chairman Willie Phillips said at the time that financial incentives must accompany federal mandates "to encourage utilities to proactively make additional cybersecurity investments in their systems." While the FERC rule allows utilities to recover cybersecurity expenses through customer rates, the NSTAC model suggests providing tax incentives upfront so critical infrastructure operators pay less when they spend money on enhanced cybersecurity standards.


Continuous Delivery: Gold Standard for Software Development

In the context of CD, developers must be able to easily and quickly understand why a product or update has failed. Given that between 50% and 80% of updates to software fail, developers need to be able to rapidly identify the exact point of failure and resolve it. This reduction in incident resolution time — or bug fixing — is one of the significant benefits of developers consistently working toward the metric of releasability. This means that when problems arise, they are easy to fix and recovery cycles are quick. To meet increasingly quick development targets, developers need to find ways to reduce the time they spend on incident response and troubleshooting. To help with this, they need access to real-time insights that allow them to identify, diagnose and resolve any incidents as they arise. These insights can give developers an instant, digestible understanding of how changes affect their software development pipelines, even when changes may not be significant enough to cause an incident. These “change events” offer a trail of breadcrumbs through every change made to a product throughout its development cycle, allowing developers to see the direct effects of each update. 
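
A minimal sketch of what such a change event might look like in practice, assuming a hypothetical ChangeEvent record and an in-memory log (the names and fields are illustrative, not taken from any particular CD tool):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ChangeEvent:
    """One breadcrumb in the trail of changes made to a product."""
    service: str
    change_id: str  # e.g., a commit SHA or deploy ID
    description: str
    author: str
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# In-memory log; a real pipeline would push these to an observability
# backend so incidents can be correlated with the changes behind them.
event_log: list[ChangeEvent] = []

def record_change(event: ChangeEvent) -> None:
    event_log.append(event)

def changes_since(service: str, since: datetime) -> list[ChangeEvent]:
    """During an incident: list recent changes to the failing service."""
    return [e for e in event_log
            if e.service == service and e.timestamp >= since]
```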


Transitioning to memory-safe languages: Challenges and considerations

We encourage the community to consider writing in Rust when starting new projects. We also recommend Rust for critical code paths, such as areas typically abused or compromised or those holding the “crown jewels.” Great places to start are authentication, authorization, cryptography, and anything that takes input from a network or user. While adopting memory safety will not fix everything in security overnight, it’s an essential first step. After all, even the best programmers make memory safety errors when using languages that aren’t inherently memory-safe. By using memory-safe languages, programmers can focus on producing higher-quality code rather than perilously contending with low-level memory management. However, we must recognize that it’s impossible to rewrite everything overnight. OpenSSF has created a C/C++ Hardening Guide to help programmers make legacy code safer without significantly impacting their existing codebases. Depending on your risk tolerance, this is a less risky path in the short term. Once your rewrite or rebuild is complete, it’s also essential to consider deployment.


Personalised learning for Gen Z: How customised content is reshaping education

As no two students possess the same skills, learning gaps and future goals, a range of personalised learning methods is necessary. This includes adaptive and blended learning, together with student-directed and project-based learning. In this way, students absorb lessons faster and more effectively while retaining them longer. Conversely, traditional learning is based on physical classroom learning and standard curricula. It’s also time-consuming and cumbersome, with a one-size-fits-all approach that overlooks individual needs. Given the numerous mandatory textbooks and reading materials, it’s expensive, unlike the more cost-effective e-learning modules. Additionally, technology facilitates the delivery of customised content via short videos and other bite-sized formats more suitable for tech-savvy Gen Zs. With instant access to information that facilitates shopping, travel and more, these youthful groups hold the same expectations of learning. As a result, Gen Zs like consuming information via videos, podcasts or personalised learning modules that can be accessed later.


Agile Architecture, Lean Architecture, or Both?

Creating an architecture for a software product requires solving a variety of complex problems; each product faces unique challenges that its architecture must overcome through a series of trade-offs. We have explored this decision process in other articles, in which we described the concept of the Minimum Viable Architecture (MVA) as a reflection of these trade-off decisions. The MVA is the architectural complement to a Minimum Viable Product or MVP. The MVA balances the MVP by making sure that the MVP is technically viable, sustainable, and extensible over time; it is what differentiates the MVP from a throw-away proof of concept. Lean approaches frame the core problem of software development as improving the flow of work, but from an architectural perspective, the core problem is creating an MVP and an MVA that are both minimal and viable. One key aspect of an MVA is that it is developed incrementally over a series of releases of a product. The development team uses the empirical data from these releases to confirm or reject hypotheses that they form about the suitability of the MVA.


How generative AI will change low-code development

“Skill sets will evolve to encompass a blend of traditional coding expertise, along with proficiency in utilizing low/no-code platforms, understanding how to integrate AI technologies, and effectively collaborating in teams using these tools,” says Ed Macosky, chief product and technology officer at Boomi. “The combination of low code alongside copilots will allow developers to enhance their skills and focus on supporting business outcomes, rather than spending the bulk of their time learning different coding languages.” Armon Petrossian, CEO and co-founder of Coalesce, adds, “There will be a greater emphasis on analytical thinking, problem-solving, and design thinking with less of a burden on the technical barrier of solving these types of issues.” Today, code generators can produce code suggestions, single lines of code, and small modules. Developers must still evaluate the code generated to adjust interfaces, understand boundary conditions, and evaluate security risks. But what might software development look like as prompting, code generation, and AI assistants in low-code improve? “As programming interfaces become conversational, there’s a convergence between low-code platforms and copilot-type tools,” says Srikumar Ramanathan, chief solutions officer at Mphasis.


Is It Too Late for My Organization to Leverage AI?

The short answer is no, but a pragmatic approach to adopting AI is becoming increasingly valuable. ... The key to efficient AI implementation is caution and planning. Leaders must assess their enterprise’s organizational, operational, and business challenges and use those findings to guide an intelligent AI strategy. Organizationally, successful AI implementation requires interdepartmental collaboration and training. Stakeholders -- including leaders and the daily drivers of productivity -- should understand the benefits of AI implementation. Otherwise, employee anxieties or misinformation might impede progress. Operational challenges to AI deployment include inefficient manual processes and a lack of standardization. Remember, AI is not a silver bullet for resolving existing tech inefficiencies. Before implementation, leaders must assess their tech stack, ensuring that all relevant software systems can talk to one another. From a business perspective, unclear AI use cases are a recipe for disaster. AI and machine learning (ML) investments should have specific KPIs. Furthermore, all investments should take a phased approach that prioritizes a solid data foundation before deployment.


Has the CIO title run its course?

“It’s time for the rest of organizations to recognize there is not a single CIO role anymore but layers of CIOs,” he says. The chief of technology needs to be a digital leader “and that’s why the name is so important.” While acknowledging that every company is different, Wenhold says if he were on the outside looking in at a senior executive meeting, “the person sitting there with the CBTO title isn’t talking about keeping the lights on, and the internet connection up, and what technologies we’re using. They’re talking about how is the business absorbing the latest deployment into production.” The person responsible for keeping the lights on should be a director, he adds, and “I don’t see that role at the table.” Although technology’s role has been widely elevated in most companies across all industries, Wenhold believes it will take some time for other organizations to understand what the CBTO role can and should be. “I still believe we have a lot of work to do in the industry. The CIO name is more important to your peers than to the person holding the title,” he maintains. Sule agrees, saying that the CBTO title is effective because it helps to “blur the lines” between technology and business and instills a sense that everyone in Sule’s department is there to serve the business.


Japan Blames North Korea for PyPI Supply Chain Cyberattack

"This attack isn't something that would affect only developers in Japan and nearby regions, Gardner points out. "It's something for which developers everywhere should be on guard." Other experts say non-native English speakers could be more at risk for this latest attack by the Lazarus Group. The attack "may disproportionately impact developers in Asia," due to language barriers and less access to security information, says Taimur Ijlal, a tech expert and information security leader at Netify. "Development teams with limited resources may understandably have less bandwidth for rigorous code reviews and audits," Ijlal says. Jed Macosko, a research director at Academic Influence, says app development communities in East Asia "tend to be more tightly integrated than in other parts of the world due to shared technologies, platforms, and linguistic commonalities." He says attackers may be looking to take advantage of those regional connections and "trusted relationships." Small and startup software firms in Asia typically have more limited security budgets than do their counterparts in the West, Macosko notes. 



Quote for the day:

"After growing wildly for years, the field of computing appears to be reaching its infancy." -- John Pierce

Daily Tech Digest - March 10, 2024

What’s the privacy tax on innovation?

A few decades ago, California had one of the strongest definitions for certifying organic foods in the US. Eventually, the US government stepped in with a watered-down definition. Despite the pain of new privacy controls, the US data broker industry will lobby for a similar approach to at least harmonize privacy regulations at the Federal level that limit the impact on their business models when operating across state lines. For businesses and consumers, a more equitable approach would be to add a few more teeth to the cost of data misuse arising from legal sales, employee theft, or breaches. A few high-profile payouts arising from theft or when this data is used as part of multi-million dollar ransomware attacks on critical business systems would have a focusing effect on better privacy management practices. Another option is to turn to banks as holders of trust. Banks may be a good first point for managing the financial data we directly share with them. But what about all the data that others gather that may not be tied to traditional identifiers like social security numbers (SSN) used to unify data, such as IP addresses, phone numbers, Wi-Fi hubs, or the trail of GPS dots that gravitate to your home or office?


Living with the ghost of a smart home’s past

There were the window shades that always opened at 8AM and always closed at sundown. My brother disconnected everything that looked like a hub, and still, operating on some inaccessible internal clock, the shades carried on as they were once programmed to do. ... This is the state of home ownership in 2024! People have been making their homes smart with off-the-shelf parts for well over a decade now. Sometimes they sell those homes, and the new homeowners find themselves mired in troubleshooting when they should be trying to pick out wall colors. Some former homeowners will provide onboarding to the home’s smart home system, but most do as the guy who used to own my brother’s house did. They walk away and leave it as an adventure for the next person. ... I really hope the new renters of my old Brooklyn walk-up appreciate all the 2014 Philips Hue lights I left installed in the basement. There’s a calculus you make as you’re moving. It’s a hectic time, and there’s a lot to be done. Do you want to spend half the day freeing all those Hue bulbs from their obnoxious and broken recessed light housings, or do you want to leave a potential gift for the next homeowner and get started on nesting in your new place? 


Overcoming the AI Privacy Predicament

According to one study by Brookings, while 57% of consumers felt that AI will have a net negative impact on privacy, 34% were unsure about how AI would affect their privacy. Indeed, AI evokes a mixed set of thoughts and emotions in consumers. For most people, the promise of AI is clear: from increasing efficiency, to automating mundane tasks and freeing up more time for creative work, to improving outcomes in areas such as healthcare and education. ... In the realm of AI, the lack of trust is significant. Indeed, 81% of consumers think the information collected by AI companies will be used in ways people are uncomfortable with, as well as in ways that were not originally intended. That consumers are put in a seemingly impossible predicament regarding their privacy leaves them little choice but to a.) consent, or b.) forgo use of the product or service. Both choices leave consumers wanting more from the digital economy. When a new technology has negative implications for privacy, consumers have shown they are willing to engage in privacy-protective behaviors, such as deleting an app, withholding personal information, or abandoning an online purchase altogether.


How Static Analysis Can Save Your Software

While static analysis is a means of pattern detection, fixing an actual bug (for example, dereferencing a null pointer) is much harder, albeit possible. It becomes mathematically difficult to track exponentially increasing possible states. We call this “path explosion.” Say you’re writing code that, given two integers, divides one by the other, and there are various failure modes depending on the integers’ values. But what if the denominator is zero? That results in undefined behavior, and it means you need to look at where those integers came from, their possible values and what branches they took along the way. If you can see that the denominator is checked against zero before the division — and branches away if it is — you should be safe from division-by-zero issues. This theoretical stepping through stages of code is called “symbolic execution.” It’s not too complicated if the checkpoint is fairly close to the division process, but the further away it gets, the more branches you must account for. Crossing the function boundary gets even trickier. But once you have calls from other translation units, the problem becomes intractable in the general case. 
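
The division example might look like the sketch below; the comments trace the branches a symbolic-execution engine has to consider (the code is illustrative, not from the article):

```python
def safe_divide(numerator: int, denominator: int) -> float:
    # A symbolic analyzer explores two paths through this function:
    # denominator == 0 (error branch) and denominator != 0 (safe).
    if denominator == 0:
        raise ValueError("denominator must be nonzero")
    return numerator / denominator

def compute(a: int, b: int, flag: bool) -> float:
    # Every extra branch multiplies the paths the analyzer must track,
    # producing the "path explosion" described above.
    if flag:
        b = b - a  # Can b - a be zero here? One more question the
                   # analyzer must answer on this path.
    return safe_divide(a, b)
```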


Avoiding Shift Left Exhaustion – Part 1

Shift left requires developers to be involved in testing, quality assurance, and collaboration throughout the development cycle. While this is undoubtedly beneficial for the final product, it can lead to an increased workload for developers who must balance their coding responsibilities with testing and problem-solving tasks. ... Adapting to Shift left practices often requires developers to acquire new skills and stay current with the latest testing methodologies and tools. This continuous learning can be intellectually stimulating and exhausting, especially in an industry that evolves rapidly. Developers must understand new tools, processes, and technologies as more things get moved earlier in the development lifecycle. ... The added pressure of early and continuous testing and the demand for faster development cycles can lead to developer burnout. When developers are overburdened, their creativity and productivity may suffer, ultimately impacting the software quality they produce. ... Shifting testing and quality assurance left in the development process may impose strict time constraints. Developers may feel pressured to meet tight deadlines, which can be stressful and lead to rushed decision-making, potentially compromising the software’s quality.


Ransomware Attacks on Critical Infrastructure Are Surging

Especially under fire are critical services. Healthcare and public health agencies dominated, filing 249 reports to IC3 last year over ransomware attacks, followed by 218 reports from critical manufacturing and 156 from government facilities. Ransomware-wielding attackers are potentially targeting these sectors most because they perceive the victims as having a proclivity to pay, given the risk to life or essential business processes posed by their systems being disrupted. Last year, IC3 received a ransomware report from at least one victim in all of the 16 critical infrastructure sectors - which include financial services, food and agriculture, energy and communications - except for two: dams, and nuclear reactors, materials and waste. The ransomware group tied to the largest number of successful attacks against critical infrastructure reported to IC3 last year was LockBit, followed by Alphv/BlackCat, Akira, Royal and Black Basta. Law enforcement recently disrupted Alphv/BlackCat, as well as LockBit, after which each group separately claimed to have rebooted before appearing to go dark.


What’s the missing piece for mainstream Web3 adoption?

Today’s Web3 lacks a unifying ecosystem, causing the market to fracture into multiple, independently evolving use cases. Crypto enthusiasts have to use various decentralized applications (DApps) and platforms to perform multiple transactions and interact with the different sectors of Web3. However, this isn’t a sustainable growth model for the Web3 industry and is more of a deterrent rather than a benefit when it comes to crypto adoption. ... Recognizing the need for a more integrated approach, some Web3 players are moving beyond the hype. Legion Network is emerging as a notable example among these. As a one-stop shop for Web3, Legion Network addresses the complexity of the industry and reaches new audiences. It brings together essential Web3 use cases, including a proprietary crypto wallet with comprehensive portfolio tracking, DeFi swaps and bridges, engaging play-to-earn/win games, captivating quests with prize rewards, a launchpad for emerging projects and a unique SocialFi experience that fosters community engagement.


What’s Driving Changes in Open Source Licensing?

In response to the challenges posed by cloud computing, some vendor-driven open source projects have changed their licenses or their go-to-market (GTM) models. For example, MongoDB, Elastic, Confluent, Redis Labs and HashiCorp have adopted new licenses that restrict the use of their software-as-a-service by third parties or require them to pay fees or share their modifications. These changes are intended to protect the revenue and sustainability of the original vendors and to ensure that they can continue to invest in the open source project. However, these changes have also caused some controversy and backlash from the user community, who may feel that the project is becoming less open and more proprietary or that they are losing some of the benefits and freedoms of open source. However, community-driven open source projects have largely maintained their permissive licenses and their collaborative approach. These projects still benefit from the diversity and scale of their user community, who contribute to the development, maintenance, support and security of the software. These projects also leverage the support of organizations and foundations, such as the Linux Foundation, the Apache Software Foundation and the CNCF, who provide governance, funding and infrastructure.


Botnets: The uninvited guests that just won’t leave

Reducing response time is vital. The longer the dwell time, the more likely it is that botnets can impact a business, particularly given that botnets can spread across many devices in a short period. How can security teams improve detection processes and shrink the time it takes to respond to malicious activity? Security practitioners should have multiple tools and strategies at their disposal to protect their organization’s networks against botnets. An obvious first step is to prevent access to all recognized C2 (command-and-control) databases. Next, leverage application control to restrict unauthorized access to your systems. Additionally, use Domain Name System (DNS) filtering to target botnets explicitly, concentrating on each category or website that might expose your system to them. DNS filtering also helps to mitigate the Domain Generation Algorithms (DGAs) that botnets often use. Monitoring data while it enters and leaves devices is vital as well, as you can spot botnets as they attempt to infiltrate your computers or those connected to them. This is what makes security information and event management (SIEM) technology, paired with detection of malicious indicators of compromise (IoCs), so critical to protecting against bots.
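
As a rough illustration of the DNS-filtering idea, the sketch below checks lookups against a blocklist and applies a simple entropy heuristic for the machine-generated names DGAs produce. The blocklist contents and the threshold are made-up placeholders, not a production rule set:

```python
import math
from collections import Counter

# Placeholder blocklist; real deployments pull known C2 domains from
# threat-intelligence feeds.
BLOCKED_DOMAINS = {"evil-c2.example", "bot-update.example"}

def shannon_entropy(s: str) -> float:
    counts = Counter(s)
    return -sum((n / len(s)) * math.log2(n / len(s))
                for n in counts.values())

def should_block(domain: str, entropy_threshold: float = 3.8) -> bool:
    """Block known-bad domains and flag long, high-entropy labels that
    resemble DGA output. The threshold is an illustrative guess."""
    if domain in BLOCKED_DOMAINS:
        return True
    label = domain.split(".")[0]
    return len(label) > 12 and shannon_entropy(label) > entropy_threshold

print(should_block("evil-c2.example"))         # True: blocklisted
print(should_block("xk3v9qz2mw1rt8.example"))  # True: DGA-like label
print(should_block("mail.example"))            # False
```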


Are You Ready to Protect Your Company From Insider Threats? Probably Not

The real problem is that employees and employers don’t trust each other. This is an enormous risk for employers, as this environment makes it more likely that insider threats, security risks that originate from within the company, will emerge or intensify when tensions are high and motivations, including financial strain, dissatisfaction or desperation, drive individuals to act against their own organization. That’s the bad news. The worst news is that most companies are unprepared to meet the moment. ... Insider threats often betray their motivation. Sometimes, they tell colleagues about their intentions. Other times, their actions speak louder than words: attempts to work around security protocols, active resentment toward coworkers or leadership, or general job dissatisfaction can be red flags that an insider threat is about to act. Explaining the impact of human intelligence, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) writes, “An organization’s own personnel are an invaluable resource to observe behaviors of concern, as are those who are close to an individual, such as family, friends, and coworkers.”



Quote for the day:

"Leaders must be close enough to relate to others, but far enough ahead to motivate them." -- John C. Maxwell

Daily Tech Digest - March 09, 2024

IT’s Waste Management Job With Software Applications

Shelfware is precisely that: applications and systems that sit on the physical or virtual shelf because nobody uses them. They could even be installed, where they take up storage space. Shelfware doesn’t start out that way. Someone at some point purchased that software because they thought it would address a company's need. Then, through either disappointment with the product or product obsolescence, they find out that the product doesn’t meet their need. There will always be well-intentioned software failures like this in companies, but if IT doesn’t sweep out the debris by getting rid of the software and cancelling contracts, shelfware will continue to show up as an expense in the IT budget. ... There are few more painful software installation issues than system integration, especially when vendors tell you that they have interfaces to your systems, and you discover major flaws in the interfaces that you must manually correct. Complicated integrations set back projects and are difficult to explain to management. If an integration becomes too difficult, the software likely gets dumped, but someone forgets to dump it from the budget.


Securing open source software: Whose job is it, anyway?

"We at CISA are particularly focused on OSS security because, as everyone here knows, the vast majority of our critical infrastructure relies on open source software," Easterly declared in her keynote. "And while the Log4Shell vulnerability might have been a big wakeup call for many in government, it demonstrated what this community has known and warned about for years: due to its widespread deployment, the exploitation of OSS vulnerabilities becomes more impactful," she added. In addition to holding software developers liable for selling vulnerable products, Easterly has also repeatedly called on vendors to support open source software security – either via money or dedicated developers to help maintain and secure the open source code that ends up in their commercial projects. ... Easterly repeated this call to action at this week's Summit, citing a Harvard study [PDF] that estimates open source software has generated more than $8 trillion dollars in value globally. "I do have one ask of all the software manufacturers," Easterly noted – though it ended up being technically two asks. "We need companies to be both responsible consumers of and sustainable contributors to the open software they use," she continued.


Anatomy of a BlackCat Attack Through the Eyes of Incident Response

“When responding to an incident, one of the areas that should be looked at is ‘What will the attacker understand and how will they react?’ – this is one of the areas that makes IR work for professionals,” Elboim explained. “On one hand, response activities should do the maximum to contain and remediate, but on the other, they should be done carefully so that the attacker will not know that activity is taking place – or at least not fully understand the type and scope of activities that are being done.” It was too late in this instance. “Cutting the Internet connection is a severe action that was unavoidable in this specific case, but there are many cases where we have taken a more careful approach and planned our activities so that the attacker isn’t informed of our activities, until we and the company we assist, are fully ready,” he added. The important point here, however, is that the victim’s senior management was brave enough to take that severe action. By now, the attackers had succeeded in exfiltrating data, but had not yet commenced encryption. That encryption was blocked. It did not prevent BlackCat from attempting to extort the victim over the stolen data, and for the next three weeks the attacker attempted to do so. 


The Hidden Cost of Using Managed Databases

As an engineer, nothing frustrates me more than being unable to solve an engineering problem. To an extent, databases can be seen as a black box. Most database users use them as a place to store and retrieve data. They don’t necessarily bother about what’s going on all the time. Still, when something malfunctions, the users are at the mercy of whatever tool the provider supplied to troubleshoot them. Providers generally run databases on top of some virtualization (Virtual Machines, Containers) and are sometimes even operated by an orchestrator (e.g., K8s). Also, they don’t necessarily provide complete access to the server where the database is running. The multiple layers of abstraction don’t make the situation any easier. While providers don’t offer full access to prevent users from "shooting themselves in the foot," an advanced user will likely need elevated permissions to understand what’s happening on different stacks and fix the underlying problem. This is the primary factor influencing my choice to self-host software, aiming for maximum control. This could involve hosting on my local data center or utilizing foundational elements like Virtual Machines and Object Storage, allowing me to create and manage my services.


How To Improve Your DevOps Workflow

When you think about DevOps, the first thing that comes to mind is collaboration. Because the whole methodology is based on this principle. We know the development and operations teams were originally separated, and there was a huge gap between their activities. DevOps came to transform this, advocating for close collaboration and constant communication between these departments throughout the complete software development life cycle. This increases the visibility and ownership of each team member while also building a space where every stage can be supervised and improved to deliver better results. ... The second thought we all have when asked about DevOps? Automation. This is also a main principle of the DevOps methodology, as it accelerates time-to-market, eases tasks that were usually manually completed, and quickly enhances the process. Software development teams can be more productive while building, testing, releasing code faster, and catching errors to fix them in record time. ... What organizations love about DevOps is its human approach. It prioritizes collaborators, their needs, and their potential. 


How to Successfully Implement AI into Your Business — Overcoming Challenges and Building a Future-Ready Team

Creating a future-ready team involves the strategic use of AI technologies to enhance human capabilities. As the AI landscape continues to change, organizations need to focus on upskilling their employees and ensuring a digitally literate workforce that can interact with intelligent systems. It is critical to develop a culture of continuous learning and flexibility. By identifying the tasks best suited to automation and AI, teams can concentrate on complex problem-solving and creativity. The collaboration between human workers and AI algorithms increases productivity and innovation. In addition, promoting diversity and inclusivity in AI development helps to ensure a variety of perspectives that will lead to ethical and unbiased solutions. ... In addition to technological integration, creating a future-ready team requires not only embracing the concept of lifelong learning but also an open attitude toward change and inclusivity. As the business world continues to evolve in this ever-expanding technological environment, careful integration, continuous adaptation and fostering human skills are vital for long-term success and a balanced relationship between people and AI systems at work.


Data Management Predictions for 2024: Five Trends

In a data mesh context, business stakeholders will need to be able to define and create data products and govern the data based on their domain needs. IT will need to deploy the right infrastructure to enable business users to be more self-sufficient. In this data-centric era, it is not enough to merely package data attractively; organizations need to enhance the entire end-user experience. Echoing the best practices of e-commerce giants, contemporary data platforms must offer features like personalized recommendations and popular product highlights, while also building confidence through user endorsements and data lineage visibility. ... GenAI will have a huge impact on data management and result in tools and technologies that are more business-friendly. However, in an increasingly distributed data landscape, without the ability to assure access to high-quality, trusted data, a GenAI-enabled data management infrastructure will be of little or no use. Organizations are encountering several additional challenges as they attempt to implement GenAI and large language models (LLMs), including issues with data quality, governance, ethical compliance, and cost management.


Risk mitigation should address threat, vulnerability and consequence

To devise effective risk mitigation strategies, it’s critical to assess all three factors: threat, vulnerability, and consequence. If you focus only on threats and vulnerabilities without understanding the consequences, you might end up with risk assessment and mitigation gaps. CISOs must be able to identify and assess potential threats, including those from both external and internal sources. They must also comprehensively understand the organization's assets and vulnerabilities, including the IT infrastructure, data systems, and employee workforce. And they must be able to quantify the potential consequences of a cyberattack, including financial losses, reputational damage, and operational disruptions. ... Effective cyber-risk management needs to involve the entire organization, particularly as everyone has a role to play in identifying and managing the consequences of a cyber incident. CISOs must effectively communicate cyber risks and their implications to all of the employees at the company and give them the training and resources they need to protect the organization.


Researchers Develop Self-Replicating Malware “Morris II” Exploiting GenAI

GenAI attacks of this type have not yet been seen in the wild, and the researchers demonstrated this approach under lab conditions. But security researchers have been warning that state-sponsored hackers have been observed experimenting with the offensive capability of ChatGPT and similar tools since they became available. The self-replicating malware functions by identifying prompts that will generate output that serves as a further prompt, in a process that is not very different from how common buffer overflow attacks operate. The approach also exploits a feature of GenAI called “retrieval-augmented generation” (RAG), a method by which LLMs can be prompted to retrieve data that exists outside of their training model. Ultimately the researchers blamed poor design for opening the door to this approach, urging GenAI companies to go back to the drawing board and improve their architecture. GenAI email assistants of the sort that were attacked here are already a popular type of automation and productivity tool, performing features that range from automatically forwarding incoming emails to relevant parties to generating replies. 
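
On the defensive side, one mitigation this research suggests is to check whether a model's output reproduces its retrieved input before passing it along, since verbatim self-replication is the worm's propagation mechanism. A minimal, hypothetical sketch (the function name and the 200-character window are invented for illustration):

```python
def looks_self_replicating(retrieved_context: str, model_output: str,
                           min_overlap: int = 200) -> bool:
    """Flag output that reproduces a long verbatim run of the retrieved
    context -- a crude proxy for a prompt copying itself forward."""
    if len(retrieved_context) < min_overlap:
        return False
    for start in range(len(retrieved_context) - min_overlap + 1):
        if retrieved_context[start:start + min_overlap] in model_output:
            return True
    return False

# A GenAI email assistant could quarantine replies for which
# looks_self_replicating(...) returns True instead of sending them on.
```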


Microsoft says Russian hackers stole source code after spying on its executives

It’s not clear what source code was accessed, but Microsoft warns that the Nobelium group, or “Midnight Blizzard,” as Microsoft refers to them, is now attempting to use “secrets of different types it has found” to try to further breach the software giant and potentially its customers. “Some of these secrets were shared between customers and Microsoft in email, and as we discover them in our exfiltrated email, we have been and are reaching out to these customers to assist them in taking mitigating measures,” says Microsoft. Nobelium initially accessed Microsoft’s systems through a password spray attack last year. This type of attack is a brute-force variant in which hackers try a small set of likely passwords against a large number of accounts, staying below per-account lockout thresholds. Microsoft had configured a non-production test tenant account without two-factor authentication enabled, allowing Nobelium to gain access. “Across Microsoft, we have increased our security investments, cross-enterprise coordination and mobilization, and have enhanced our ability to defend ourselves and secure and harden our environment against this advanced persistent threat,” says Microsoft.
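
Because a spray spreads a few passwords across many accounts, a detector can look for source IPs whose failed logins touch unusually many distinct accounts. A hypothetical sketch (the log shape and the threshold are invented for illustration):

```python
from collections import defaultdict
from typing import NamedTuple

class LoginFailure(NamedTuple):
    source_ip: str
    account: str

def detect_spray(failures: list[LoginFailure],
                 min_accounts: int = 20) -> set[str]:
    """Flag source IPs whose failed logins spread across many distinct
    accounts -- the signature of a spray rather than a targeted guess.
    The threshold of 20 accounts is an arbitrary illustrative value."""
    accounts_per_ip: dict[str, set[str]] = defaultdict(set)
    for failure in failures:
        accounts_per_ip[failure.source_ip].add(failure.account)
    return {ip for ip, accounts in accounts_per_ip.items()
            if len(accounts) >= min_accounts}
```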



Quote for the day:

"The best preparation for tomorrow is doing your best today." -- H. Jackson Brown, Jr.

Daily Tech Digest - March 08, 2024

What is the cost of not doing enterprise architecture?

Without an EA, an organisation may struggle to show how its IT projects and technology decisions align with its business goals, leading to initiatives that do not support the overall business strategy or deliver optimal value. A company favouring growth through acquisition should be buying systems and negotiating contracts that support onboarding of more users and more data/transactions without cost increasing significantly. The EA should make it possible to understand which processes and technologies the strategy would affect, to model out the impact, and to feed that analysis into the decision process. Equally, the architecture can consider strategic trends and be designed to support those; for example, the now-bankrupt US retailer Sears was slow to adopt e-commerce, allowing competitors to capture the growing online shopping market. ... Your Enterprise Architecture provides a framework for making informed decisions about IT investments and strategies. Without the holistic view that EA offers, decision-makers may lack the full context for their decisions, leading to choices that are suboptimal or that fail to consider the interdependencies and long-term implications for the organisation.


Making Software Development Boring to Deliver Business Value

Boerman argued that software development should become boring. He made the distinction between boring software and exciting software: Boring software in that categorization resembles all software that has been built countless times, and will be so a billion times more. In this context, I am specifically thinking about back-end systems, though this rings true for front-end systems as well. Exciting software is all the projects that require creativity to build. Think about purpose-built algorithms, automations, AI integrations, and the like. Making software development boring again is about laying a prime focus on delivering business value, and making the delivery of these aspects predictable and repeatable, Boerman argued. This requires moving infrastructure out of the way in such a way that it is still there, but does not burden the day-to-day development process: While infrastructure takes most of the development time, it technically delivers the least amount of business value, which can be found in the data and the operations executed against it. New exciting experiments may be fast-moving and unstable, while the boring core is meant to be and remain of high quality such that it can withstand outside disruptions, Boerman concluded.


New TDWI Assessment Examines the State of Data Quality Maturity Today

“With data becoming such a critical part of a business’s ability to compete, it’s no wonder there’s a growing emphasis on data quality,” Halper began. “Organizations need better and faster insights in order to succeed, and for that they need better, more enriched data sets for advanced analytics -- such as predictive analytics and machine learning.” She explained that to do this, organizations are not only increasing the amount of traditional, structured data they’re collecting, they’re also looking for newer data types, such as unstructured text data or semistructured data from websites. Taken together, these various types of data can offer significantly more opportunities for insights, she added. As an example, Halper mentioned the idea of an organization using notes from its call center -- typically unstructured or semistructured text data -- to analyze customer satisfaction, either with a particular product or with the company as a whole. This information can then be fed back into an analytics or machine learning routine and reveal patterns or other insights meaningful to the company. “Regardless of the type of data or its end use,” she said, “the original data must be high quality. It must be accurate, complete, timely, trustworthy, and fit for purpose.”


The Five Biggest Challenges with Large-Scale Cloud Migrations

Several issues can arise when attempting to migrate legacy systems to the cloud. The system may not be optimized for cloud performance and scalability, so it is important to develop and implement solutions that boost the system’s speed and capacity to get the most from the cloud migration. Other issues common with legacy system integration include data security, data integrity, and cost management. The latter is often a particular concern because companies may also be required to pay for training and maintenance in addition to the cost of migration. ... The risks of migrating data to the cloud include data security, data corruption, and excessive downtime, which can cost money and negatively impact performance. To optimize migration success and minimize downtime, it is vital for companies to understand the amount of data involved and the bandwidth necessary to complete the transfer with minimal work disruption. ... Due to poor infrastructure and configuration, many companies cannot take advantage of the benefits of cloud computing. Often, companies fail to maximize the move from fixed infrastructure to scalable and dynamic cloud resources.


Getting the BELT: Empowering Executive Leadership in Data Governance

The active engagement of the ELT in the data governance process is critical not only for setting a strategic direction, but also for catalyzing a shift in organizational mindset. By championing the principles of NIDG, the ELT paves the way for a governance model that is both effective and sustainable. This leadership commitment helps in breaking down silos, promoting cross-departmental collaboration, and establishing a shared vision that recognizes data as a pivotal asset. Through their actions and decisions, executive leaders serve as role models, demonstrating the value of data governance and encouraging a culture of continuous improvement. Their involvement ensures that data governance initiatives are aligned with business strategies, driving the organization toward achieving its goals while maintaining data integrity and compliance. ... The journey towards effective data governance begins with buy-in, not just from the ELT, but across the entire organization. Achieving this requires the ELT to understand the strategic importance of data governance and to communicate this value convincingly. 


Going passwordless with passkeys in Windows and .NET

Passkeys managed by Windows Hello are “device-bound passkeys” tied to your PC. Windows can support other passkeys, for example passkeys stored on a nearby smartphone or on a modern security token. There’s even the option of using third parties to provide and manage passkeys, for example via a banking app or a web service. Windows passkey support allows you to save keys on third-party devices. You can use a QR code to transfer the passkey data to the device, or if it’s a linked Android smartphone, you can transfer it over a local wireless connection. In both cases the devices need a biometric identity sensor and secure storage. As an alternative, Windows will work with FIDO2-ready security keys, storing passkeys on a YubiKey or similar device. A Windows Security dialog helps you choose where to save your keys and how. If you’re saving the key on Windows, you’ll be asked to verify your identity using Windows Hello before the device is saved locally. If you’re using Windows 11 22H2 or later, you can manage passkeys through Windows settings.
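
At its core, a passkey login is public-key challenge-response: the server stores only the public key, sends a fresh random challenge, and verifies the signature produced by the device-bound private key. The sketch below shows just that core idea using the `cryptography` package; a real WebAuthn flow adds client data, origin and relying-party checks, and signature counters.

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the authenticator (e.g., Windows Hello or a YubiKey)
# keeps the private key; the server stores only the public key.
private_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = private_key.public_key()

# Login: the server issues a one-time random challenge...
challenge = os.urandom(32)

# ...the authenticator signs it after verifying the user locally
# (PIN or biometric), so no shared secret ever crosses the wire...
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the server accepts the login only if the signature checks out.
try:
    server_stored_public_key.verify(signature, challenge,
                                    ec.ECDSA(hashes.SHA256()))
    print("login accepted")
except InvalidSignature:
    print("login rejected")
```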


Generative AI on its own will not improve the customer experience

Businesses around the world hope that, beyond the hype of generative AI, there lies a near-term path to improving business efficiency and in parallel a longer-term ability to grow revenue. There is one, not insignificant, consideration to weigh before the true savings can be measured. In 2024, as in 2023, generative AI and ChatGPT both trail "Customer Service / Telephone number" as search terms on Google in most countries. Most of those searches involve a quest by a customer to reach a human being. There is great frustration because most businesses are working hard to make it difficult to reach a person. This gap between the corporate commitment to removing the human connection in customer service and the customer's desire for a human connection almost always points to a bad business process. The business must examine why the customer doesn't use the self-service channel. This discovery process is a precursor to deeper self-service powered by generative AI. Our first recommendation is to step back and ensure the customer service process you want to supercharge with generative AI satisfies customers. 


How continuous SDL can help you build more secure software

Beyond making the SDL automated, data-driven, and transparent, Microsoft is also focused on modernizing the practices that the SDL is built on to keep up with changing technologies and ensure our products and services are secure by design and by default. In 2023, six new requirements were introduced, six were retired, and 19 received major updates. We’re investing in new threat modeling capabilities, accelerating the adoption of new memory-safe languages, and focusing on securing open-source software and the software supply chain. We’re committed to providing continued assurance to open-source software security, measuring and monitoring open-source code repositories to ensure vulnerabilities are identified and remediated on a continuous basis. Microsoft is also dedicated to bringing responsible AI into the SDL, incorporating AI into our security tooling to help developers identify and fix vulnerabilities faster. We’ve built new capabilities like the AI Red Team to find and fix vulnerabilities in AI systems. By introducing modernized practices into the SDL, we can stay ahead of attacker innovation, designing faster defenses that protect against new classes of vulnerabilities.


Rethinking SDLC security and governance: A new paradigm with identity at the forefront

Poorly governed identities have become a gateway for substantial incidents. High-profile breaches at companies like LastPass and Okta have illuminated the attackers' method: exploiting the identity attack vector to orchestrate some of the most notable breaches, using compromised accounts to potentially alter source code and extract valuable information. These events underscore a clear and present trend of identity theft through phishing or ransomware attacks, which then pave the way for attackers to infiltrate the software development lifecycle (SDLC), leading to the insertion of malicious code and the theft of data. Despite the clear risks, organizations continue to fumble in securing and managing these identities, making it the riskiest yet most overlooked attack vector facing SDLC security and governance today. As we pivot to address this critical oversight, it's imperative to understand the role of identity within the SDLC. The “Inverted Pyramid” analogy is a useful conceptual framework that captures the essence of the old and new paradigms and how reorienting our approach can better protect against these insidious threats.


Analyzing the CEO–CMO relationship and its effect on growth

It’s estimated that only 10 percent of Fortune 250 CEOs have marketing experience. There’s also a dramatic acceleration of digital technology in the world of marketing. We’re no longer judging marketing by television commercials. There’s a whole slew of different components to think through. And the data piece that you hinted at is that these customers’ signals are now everywhere. It’s incumbent upon us as marketers to interpret them and feed them back to our organizations in such a way that we don’t talk about data but we talk about insights and are able to connect the dots. ... As we come up with a means to measure marketing, the CEO or CFO needs to learn the measurement systems in place to understand what it means when I cut budget, what it means when I invest in it, and how we tie those activities to outcomes. That robust measurement system can help you understand your brand, how your customers perceive your brand, and what level of fidelity they give you credit for. That’s where the brand scores are really helpful. But you also need an econometric model to connect how the money you’re spending on different channels such as video, content, and search—all working in tandem—helps create the results you want.



Quote for the day:

"Success is the sum of small efforts, repeated day-in and day-out." -- Robert Collier

Daily Tech Digest - March 07, 2024

3 Key Metrics to Measure Developer Productivity

The team dimension considers business outcomes in a wider organizational context. While software development teams must work efficiently together, they must also work with teams across other business units. Often, non-technical factors, such as peer support, working environment, psychological safety and job enthusiasm play a significant role in boosting productivity. Another framework is SPACE, which is an acronym for satisfaction, performance, activity, communication and efficiency. SPACE was developed to capture some of the more nuanced and human-centered dimensions of productivity. SPACE metrics, in combination with DORA metrics, can fill in the productivity measurement gaps by correlating productivity metrics to business outcomes. McKinsey found that combining DORA and SPACE metrics with “opportunity-focused” metrics can produce a well-rounded view of developer productivity. That, in turn, can lead to positive outcomes, as McKinsey reports: 20% to 30% reduction in customer-reported product defects, 20% improvement in employee experience scores and 60% improvement in customer satisfaction ratings.
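
As a concrete illustration, two of the four DORA metrics, deployment frequency and change failure rate, can be computed from a simple deployment log. The record shape below is hypothetical:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Deployment:
    day: date
    caused_failure: bool  # did this change trigger an incident?

def deployment_frequency(deploys: list[Deployment], days: int) -> float:
    """Average number of deployments per day over the window."""
    return len(deploys) / days

def change_failure_rate(deploys: list[Deployment]) -> float:
    """Share of deployments that led to a failure in production."""
    if not deploys:
        return 0.0
    return sum(d.caused_failure for d in deploys) / len(deploys)

deploys = [Deployment(date(2024, 3, d), d % 5 == 0) for d in range(1, 8)]
print(deployment_frequency(deploys, days=7))  # 1.0 deploy per day
print(change_failure_rate(deploys))           # ~0.14
```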


Metadata Governance: Crucial to Managing IoT

Governance of metadata requires formalization and agreement among stakeholders, based on existing Data Governance processes and activities. Through this program, business stakeholders engage in conversations to agree on what the data is and its context, generating standards around organizational metadata. The organization sees the results in a Business Glossary or data catalog. In addition to Data Governance tools, IT tools significantly contribute to metadata generation and usage, tracking updates, and collecting data. These applications, often equipped with machine learning capabilities, automate the gathering, processing, and delivery of metadata to identify patterns within the data without the need for manual intervention. ... The need for metadata governance services will emerge through establishing and maintaining this metadata management program. By setting up and running these services, an organization can better utilize Data Governance capabilities to collect, select, and edit metadata. Developing these processes requires time and effort, as metadata governance needs to adapt to the organization’s changing needs. 


CISOs Tackle Compliance With Cyber Guidelines

Operationally, CISOs will need to become increasingly involved with the organization as a whole -- not just the IT and security teams -- to understand the company’s overall security dynamics. “This is a much more resource-intensive process, but necessary until companies find sustainable footing in the new regulatory landscape,” Tom Kennedy, vice president of Axonius Federal Systems, explains via email. He points to the SEC disclosure mandate, which requires registrants to disclose “material cybersecurity incidents”, as a great example of how private companies are struggling to comply. From his perspective, the root problem is a lack of clarity within the mandate of what constitutes a “material” breach, and where the minimum bar should be set when it comes to a company’s security posture. “As a result, we’ve seen a large variety in companies’ recent cyber incident disclosures, including both the frequency, level of detail, and even timing,” he says. ... “The first step in fortifying your security posture is knowing what your full attack surface is -- you cannot protect what you don’t know about,” Kennedy says. “CISOs and their teams must be aware of all systems in their network -- both benign and active -- understand how they work together, what vulnerabilities they may have.”


AISecOps: Expanding DevSecOps to Secure AI and ML

AISecOps, the application of DevSecOps principles to AI/ML and generative AI, means integrating security into the life cycle of these models—from design and training to deployment and monitoring. Continuous security practices, such as real-time vulnerability scanning, automated threat detection, and protection measures for data and model repositories, are essential to safeguarding against evolving threats. One of the core tenets of DevSecOps is fostering a culture of collaboration between development, security and operations teams. This multidisciplinary approach is even more critical in the context of AISecOps, where developers, data scientists, AI researchers and cybersecurity professionals must work together to identify and mitigate risks. Collaboration and open communication channels can accelerate the identification of vulnerabilities and the implementation of fixes. Data is the lifeblood of AI and ML models. Ensuring the integrity and confidentiality of the data used for training and inference is paramount. ... Embedding security considerations from the outset is a principle that translates directly from DevSecOps to AI and ML development.


Translating Generative AI investments into tangible outcomes

Integration of Generative AI presents exciting opportunities for businesses, but it also comes with its fair share of risks. One significant concern revolves around data privacy and security. Generative AI systems often require access to vast amounts of sensitive data, raising concerns about potential breaches and unauthorised access. Moreover, there’s the challenge of ensuring the reliability and accuracy of generated outputs, as errors or inaccuracies could lead to costly consequences or damage to the brand’s reputation. Lastly, there’s the risk of over-reliance on AI-generated content, potentially diminishing human creativity and innovation within the organisation. Navigating these risks requires careful planning, robust security measures, and ongoing monitoring to ensure the responsible and effective integration of Generative AI into business operations. Consider a healthcare organisation that implements Generative AI for medical diagnosis assistance. In this scenario, the AI system requires access to sensitive patient data, including medical records, diagnostic tests, and personal information. 


Beyond the table stakes: CISO Ian Schneller on cybersecurity’s evolving role

Schneller encourages his audience to consider the gap between the demand for cyber talent and the supply of it. “Read any kind of public press,” he says, “and though the numbers may differ a bit, they’re consistent in that there are many tens, if not hundreds of thousands of open cyber positions.” In February of last year, according to Statista, about 750,000 cyber positions were open in the US alone. According to the World Economic Forum, the global number is about 3.5 million, and according to Cybercrime magazine, the disparity is expected to persist through at least 2025. As Schneller points out, this means companies will struggle to attract cyber talent, and they will have to seek it in non-traditional places. There are many tactics for attracting security talent—aligning pay to what matters, ensuring that you have clear paths for advancing careers—but all this sums to a broader point that Schneller emphasizes: branding. Your organization must convey that it takes cybersecurity seriously, that it will provide cybersecurity talent a culture in which they can solve challenging problems, advance their careers, and earn respect, contributing to the success of the business. 


Quantum Computing Demystified – Part 2

Quantum computing’s potential to invalidate current cryptographic standards necessitates a paradigm shift towards the development of quantum-resistant encryption methods, safeguarding digital infrastructures against future quantum threats. This scenario underscores the urgency of fortifying cybersecurity frameworks to withstand the capabilities of quantum algorithms. For decision-makers and policymakers, the quantum computing era presents a double-edged sword of strategic opportunities and challenges. Embracing this nascent technology requires substantial investment in research, development, and education to cultivate a quantum-literate workforce. ... Bridging the quantum expertise gap through education and training is vital for fostering a skilled workforce capable of driving quantum innovation forward. Moreover, ethical and regulatory frameworks must evolve in tandem with quantum advancements to ensure equitable access and prevent misuse, thereby safeguarding societal and economic interests.
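
To make the cryptographic threat concrete: RSA rests on the hardness of factoring, which Shor's algorithm removes on a sufficiently large quantum computer. A toy illustration with deliberately tiny primes (real keys are far too large for the trial division shown here, which is exactly the point):

```python
# Toy RSA with tiny primes, showing why efficient factoring breaks it.
p, q = 61, 53
n, e = p * q, 17                      # (n, e) is the public key
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                   # private exponent, known only to the owner

ciphertext = pow(42, e, n)            # encrypt the message 42

# An attacker who can factor n (trivial here; Shor's algorithm does this
# efficiently for the large n used in practice) rebuilds the private key:
p2 = next(i for i in range(2, n) if n % i == 0)
q2 = n // p2
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))
assert pow(ciphertext, d2, n) == 42   # plaintext recovered without the key
```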


The Comprehensive Evolution Of DevSecOps In Modern Software Ecosystems

The potential for enhanced efficiency and accuracy in identifying and addressing security vulnerabilities is enormous, though the improvement is not without its challenges, which include the possibility of algorithmic errors and shifts in job duties. Using tools powered by artificial intelligence, teams can prevent security breaches, perform code analysis more efficiently and automate mundane operations, freeing up people to tackle more complicated and innovative problems. ... Under traditional software development approaches, security checks were frequently carried out late in the development cycle, resulting in patches that were both expensive and time-consuming. The DevSecOps methodology takes a shift-left approach, integrating security at the beginning of the development process and bringing it to the forefront. By incorporating security into the design and development phases from the start, this proactive technique not only reduces the chance that vulnerabilities are discovered only after release, but also speeds up the development process.
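
As a concrete example of shifting left, a check like the sketch below can run as a pre-commit gate so hardcoded secrets are caught before code ever enters the repository. The patterns are illustrative; production scanners cover many more cases:

```python
import re
import sys

# Illustrative patterns for common hardcoded-credential mistakes.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|password)\s*=\s*['\"][^'\"]+['\"]"),
    re.compile(r"AKIA[0-9A-Z]{16}"),   # shape of an AWS access key ID
]

def scan(path: str) -> list[str]:
    """Return offending lines in a file, for use as a pre-commit gate."""
    findings = []
    with open(path, encoding="utf-8", errors="ignore") as f:
        for lineno, line in enumerate(f, 1):
            if any(p.search(line) for p in SECRET_PATTERNS):
                findings.append(f"{path}:{lineno}: possible hardcoded secret")
    return findings

if __name__ == "__main__":
    hits = [h for path in sys.argv[1:] for h in scan(path)]
    print("\n".join(hits))
    sys.exit(1 if hits else 0)   # non-zero exit status blocks the commit
```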


How Generative AI and Data Management Can Augment Human Interaction with Data

In contrast with ETL processes, logical data management solutions enable real-time connections to disparate data sources without physically replicating any data. This is accomplished with data virtualization, a data integration method that establishes a virtual abstraction layer between data consumers and data sources. With this architecture, logical data management solutions enable organizations to implement flexible data fabrics above their disparate data sources, regardless of whether they are legacy or modern; structured, semistructured, or unstructured; cloud or on-premises; local or overseas; or static or streaming. The result is a data fabric that seamlessly unifies these data sources so data consumers can use the data without knowing the details about where and how it is stored. In the case of generative AI, where an LLM is the “consumer,” the LLM can simply leverage the available data, regardless of its storage characteristics, so the model can do its job. Another advantage of a data fabric is that because the data is universally accessible, it can also be universally governed and secured. 
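
The abstraction-layer idea can be sketched in a few lines: every source sits behind an adapter exposing the same interface, so a consumer (human or LLM) queries one logical view without knowing where rows live. The class and source names here are hypothetical:

```python
from abc import ABC, abstractmethod
from itertools import chain

class DataSource(ABC):
    """Uniform interface the virtual layer presents for every backing store."""
    @abstractmethod
    def query(self, predicate) -> list[dict]: ...

class InMemorySource(DataSource):
    """Stands in for any real adapter (warehouse, REST API, stream, ...)."""
    def __init__(self, rows: list[dict]):
        self.rows = rows
    def query(self, predicate) -> list[dict]:
        return [r for r in self.rows if predicate(r)]

class VirtualLayer:
    """Fans one logical query out to every registered source and unifies results."""
    def __init__(self, sources: list[DataSource]):
        self.sources = sources
    def query(self, predicate) -> list[dict]:
        return list(chain.from_iterable(s.query(predicate) for s in self.sources))

crm = InMemorySource([{"customer": "Acme", "region": "EU"}])
erp = InMemorySource([{"customer": "Acme", "region": "EU", "orders": 12}])
fabric = VirtualLayer([crm, erp])
print(fabric.query(lambda r: r["region"] == "EU"))   # rows from both sources
```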


Developers don’t need performance reviews

Software development is commonly called a “team sport.” Assessing individual contributions in isolation can breed unhealthy competition, undermine teamwork, and incentivize behavior that, while technically hitting the mark, can be detrimental to good coding and good software. The pressure of performance evaluations can deter developers from innovative pursuits, pushing them towards safer paths. And developers shouldn’t be steering towards safer paths. The development environment is rapidly changing, and developers should be encouraged to experiment, try new things, and seek out innovative solutions. Worrying about hitting specific metrics squelches the impulse to try something new. Finally, a one-size-fits-all approach to performance reviews doesn’t take into account the unique nature of software development. Using the same system to evaluate developers and members of the marketing team won’t capture the unique skills found among developers. Some software developers thrive fixing bugs. Others love writing greenfield code. Some are fast but less accurate. Others are slower but highly accurate.



Quote for the day:

"Perseverance is failing nineteen times and succeeding the twentieth." -- Julie Andrews

Daily Tech Digest - March 06, 2024

From AML to cybersecurity: The evolving challenges of bank compliance

For banks, it is a strategic necessity to protect their financial health and reputational standing. The ability to effectively identify, assess, and mitigate these threats is critical in safeguarding against operational disruptions and legal repercussions. In this high-stakes environment, the adoption of advanced solutions, particularly automation technology, is becoming increasingly important. These tools are not merely operational aids but strategic assets that streamline compliance processes and facilitate adherence to the constantly evolving regulatory landscape. ... KYC compliance focuses on verifying client identities and assessing their financial behavior, while AML efforts are aimed at preventing money laundering through transaction monitoring and analysis. These measures serve multiple roles in banking risk and compliance, including reducing operational risk by preventing illegal activities, mitigating legal and regulatory risks to avoid fines and reputational damage, and safeguarding the financial system and society from financial crimes.
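
A single monitoring rule gives a feel for what such automation looks like in code. The sketch below flags possible "structuring" (several deposits just under a reporting threshold within a short window); the threshold, window, and field names are chosen purely for illustration:

```python
from collections import defaultdict
from datetime import datetime, timedelta

THRESHOLD = 10_000          # illustrative reporting threshold
NEAR = 0.9                  # "just under" = within 10% of the threshold
WINDOW = timedelta(days=3)

def flag_structuring(transactions: list[dict]) -> set[str]:
    """Flag accounts with 3+ near-threshold deposits inside the window."""
    by_account = defaultdict(list)
    for t in transactions:
        if NEAR * THRESHOLD <= t["amount"] < THRESHOLD:
            by_account[t["account"]].append(t["time"])
    flagged = set()
    for account, times in by_account.items():
        times.sort()
        for i in range(len(times) - 2):
            if times[i + 2] - times[i] <= WINDOW:
                flagged.add(account)
    return flagged

txs = [{"account": "A1", "amount": 9_500, "time": datetime(2024, 3, d)}
       for d in (1, 2, 3)]
print(flag_structuring(txs))   # -> {'A1'}
```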


How Fintech Is Disrupting Traditional Banks in 2024

Broadly speaking, incumbent banks have adapted well to the past decade’s wave of fintech innovation, while startups have also managed to carve out meaningful market share. Both were able to drive and adapt to changing technology in the consumer banking space. Neobanks like Chime, SoFi and Varo found success providing “new front doors” for consumers — between them, the three companies’ apps were downloaded over 8 million times in 2023 alone. Meanwhile, incumbents were able to quickly adopt neobanks’ more attractive features like zero overdraft fees and continue to see substantial user base growth. Mobile app download data suggests incumbents and disruptors are both winning the race to be consumers’ primary financial relationship. On the business banking side, startup neobanks like Mercury and Brex benefited from early 2023 bank instability — receiving an estimated 29% of Silicon Valley Bank (SVB) deposit outflows. ... By facilitating “hands-off” investment and trading, the rise of robo-advisors opened the door to millions of consumers who were previously out of reach for wealth and asset management companies.


Suptech on the Rise As Consumer Protection & Prudential Banking Prioritised

A cultural shift is taking place alongside the digital transformation, with financial authorities creating new roles to drive suptech adoption, training staff, and collaborating across the supervisory ecosystem. Surveyed financial authorities report that the biggest impact of their suptech implementation is the speed with which they are able to respond to emerging risks and take supervisory action (76 per cent). They also cite more efficient information flows between consumers and supervisors (65 per cent), enabling better and more transparent data analysis and timely responses to potential issues. Suptech initiatives also positively impact consumer outcomes (52 per cent), resulting in improved protection and increased confidence in financial markets. ... “The diverse perspectives from the global supervisory community reflected in the State of SupTech Report serve as the guiding force in shaping our research, training programs, and digital tools. This year’s report dives particularly deeply into the strategies and structures that dictate data flows within financial authorities, which necessarily inform how suptech solutions can be tailored and harmonised with existing supervisory processes.”


Cybersecurity in the Cloud: Integrating Continuous Security Testing Within DevSecOps

To successfully integrate Continuous Security Testing (CST), you must first prepare your cloud environment. This means conducting a comprehensive security audit to identify vulnerabilities and confirm your cloud architecture is fortified against threats, which lays a robust groundwork for CST. Manual evaluations guided by Open Web Application Security Project (OWASP) resources, or sophisticated automated security testing processes, can significantly aid this endeavor. Next, conduct a detailed inventory of all assets and resources within your cloud architecture to assess its security posture. This includes everything from data storage solutions and archives to virtual machines and network configurations.
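
On AWS, for instance, that inventory can start from a handful of API calls. A minimal boto3 sketch, assuming credentials are already configured and noting that a real audit would enumerate many more resource types:

```python
import boto3  # assumes AWS credentials are configured in the environment

def inventory() -> dict:
    """List a few common resource types as a starting point for a security audit."""
    s3 = boto3.client("s3")
    ec2 = boto3.client("ec2")
    buckets = [b["Name"] for b in s3.list_buckets()["Buckets"]]
    instances = [
        i["InstanceId"]
        for r in ec2.describe_instances()["Reservations"]
        for i in r["Instances"]
    ]
    groups = [g["GroupId"] for g in ec2.describe_security_groups()["SecurityGroups"]]
    return {"s3_buckets": buckets, "ec2_instances": instances, "security_groups": groups}

print(inventory())
```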


How Leaders Can Instill Hope In Their Teams

“When something is meaningful, it helps us to answer the question ‘Why am I here?’ Amid the cost-of-living crisis and general world instability, it is important that employees are able to foster meaning in their work, as it is meaning that also brings hope to the day to day.” ... “The rising tide of conflict, complaints and concerns that we are seeing in our workplaces is contributing to high levels of anxiety and depression,” says David Liddle, CEO and chief consultant at mediation provider The TCM Group and author of Managing Conflict. “When people are spending their working days in toxic cultures, where incivility, bullying, harassment and discrimination are rife, it has a huge impact on both their physical and mental health.” ... Servantie argues that to tackle employee disengagement, leaders should “lead and inspire by example, showing that belief in change is possible, even in difficult times”. She says: “They should also remain steadfast in purpose and prioritize the growth of individuals over the growth of companies. Finally, communication and transparency in leadership are fundamental.


How to create an efficient governance control program

Your journey toward robust governance control begins with establishing a solid foundation; a house built on a shaky foundation will collapse over time. Two things are therefore key: a framework of foundational practices, and a cultural shift that treats security as a business concern rather than a technology problem. From there, you incrementally build proven practices, gauge your overall maturity, and chart a path to continuous improvement. You will need to measure and plan for today while looking ahead to where you want to be, and that view has to rest on solid ground, starting with your governance program. While navigating this step, it is important to understand your regulatory environment and build capabilities that keep your internal program compliant with your sector's requirements. Bringing in stakeholder and business context will align practices with both risk management and compliance, so that the controls you put in place are informed by the control requirements and yield compliance as a natural by-product.


4 tabletop exercises every security team should run

Third-party risk management (TPRM) exercise participants should include representatives from key upstream business partners — those who supply goods and services to the enterprise — as well as your cyber insurance provider, law enforcement, and all key stakeholders, often including the board of directors and senior management. While supply-chain attacks are ubiquitous, they are often misidentified, because the actual attack might initially be classified as ransomware, an advanced persistent threat, or some other cyber threat. Often it takes the forensics team’s post-breach investigation to establish that the attack came through a trusted third party. ... Insider threats come in two primary types: malicious insiders, who deliberately compromise corporate assets for personal, financial, political or other gain, and those who create a security vulnerability accidentally or through lack of knowledge, but without malice. In the former case, a deliberate crime against the company is committed. The latter case might involve a user error, or a user taking an action that seems reasonable for doing their job but that creates a vulnerability.


Digital Twins Are the Next Wave of Innovation, and Australia Needs to Move Quickly

In fact, in many ways, the journey of the digital twin seems to parallel the story of both digital transformation and AI before it — a lack of understanding of what digital twins are leads to excitement and investment, but without the right understanding, the risk of failure is higher. Gavin Cotterill, founder and managing director of Australian digital twin consultancy GC3 Digital, said in an interview with IoT Hub: “A lot of people think digital twin is just focused on a flashy 3D model, but effectively it is a master data management strategy.” “You need good quality data to support that decision making and the quality of our data, generally, is pretty poor. We have a lot of data, but we don’t know what to do with it,” Cotterill said. “Data governance, data strategy is the unsexy part of digital twin — it’s the engine room, it’s the fuel.” This means IT leaders face competing challenges with regard to digital twins. On the one hand, the appetite is there, particularly among executives and boards keen to be at the bleeding edge of technology. On the other hand, Australian organisations, as a whole, are not ready to tackle the digital twin opportunity.


Longer coherence: How the quantum computing industry is maturing

On-premises quantum computers are currently rarities largely reserved for national computing labs and academic institutions. Most quantum processing unit (QPU) providers offer access to their systems via their own web portals and through public cloud providers. But today’s systems are rarely expected (or contracted) to run with the five-nines resiliency and redundancy we might expect from tried and tested silicon hardware. “Right now, quantum systems are more like supercomputers and they're managed with a queue; they're probably not online 24 hours, users enter jobs into a queue and get answers back as the queue executes,” says Atom’s Hays. “We are approaching how we get closer to 24/7 and how we build in redundancy and failover so that if one system has come offline for maintenance, there's another one available at all times. How do we build a system architecturally and engineering-wise, where we can do hot swaps or upgrades or changes with as little downtime as possible?” Other providers are going through similar teething phases of working out how to make their systems – which are currently sensitive, temperamental, and complicated – enterprise-ready for the data centers of the world.
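
The queue-mediated access model Hays describes follows a familiar pattern: submit a job, wait for the queue to execute it, fetch results. The sketch below is vendor-neutral and uses no real provider SDK; all names are hypothetical:

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class Job:
    circuit: str                    # stands in for a real circuit description
    status: str = "QUEUED"
    result: dict | None = None

class QPUService:
    """Toy model of queue-mediated access to a shared quantum backend."""
    def __init__(self):
        self._jobs: Queue = Queue()

    def submit(self, circuit: str) -> Job:
        job = Job(circuit)
        self._jobs.put(job)         # callers wait their turn in the queue
        return job

    def tick(self):
        """Run the next queued job; in reality the provider drives this loop."""
        if not self._jobs.empty():
            job = self._jobs.get()
            job.status = "DONE"
            job.result = {"counts": {"00": 512, "11": 512}}  # made-up readout

service = QPUService()
job = service.submit("bell_pair")
while job.status != "DONE":         # users poll rather than holding the machine
    service.tick()
print(job.result)
```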


Why Blockchain Payments Are Misunderstood

Comparing a highly regulated system to one that sits in a gray area can be misleading. Many crypto-based remittance applications do little or no know-your-customer (KYC) and anti-money-laundering (AML) checking, which is costly and difficult to run. This is a cost advantage that is unlikely to last. Low levels of competition are another big driver of high payment costs. This is true both for business-to-business and consumer-to-consumer payments. ... On the business side, blockchains can drive costs down and build sustainable advantage through differentiated technology. While it is true that mainnet transaction costs in Ethereum are higher, the addition of smart contract functionality changes the equation entirely. Enterprises usually issue payments to each other as part of a complex agreement, which means verifying not only receipt of goods or services, but also compliance with the agreed-upon terms. ... Right now, the kind of fully digital end-to-end systems that smart contracts enable are the province of the world’s biggest companies. With scale and deep pockets, big companies have built integrated systems without blockchains.
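
That agreement-bound payment flow reads naturally as an escrow: funds lock when the payer funds the contract and release only once delivery and terms are both confirmed. Production smart contracts are typically written in a language such as Solidity; the Python below sketches only the control flow such a contract might encode:

```python
class EscrowContract:
    """Sketch of escrow logic a payment smart contract might encode."""
    def __init__(self, payer: str, payee: str, amount: int):
        self.payer, self.payee, self.amount = payer, payee, amount
        self.funded = False
        self.confirmations = {"goods_received": False, "terms_met": False}

    def fund(self, sender: str, value: int):
        assert sender == self.payer and value == self.amount, "wrong funding"
        self.funded = True                      # funds are now locked

    def confirm(self, condition: str):
        self.confirmations[condition] = True    # e.g. set by an oracle or auditor

    def release(self) -> str:
        """Pay out only when every agreed condition has been verified."""
        if self.funded and all(self.confirmations.values()):
            return f"transfer {self.amount} to {self.payee}"
        raise RuntimeError("conditions not yet satisfied; funds stay locked")

c = EscrowContract("buyer", "supplier", 1_000)
c.fund("buyer", 1_000)
c.confirm("goods_received")
c.confirm("terms_met")
print(c.release())   # -> transfer 1000 to supplier
```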



Quote for the day:

"If you don't understand that you work for your mislabeled 'subordinates,' then you know nothing of leadership. You know only tyranny." -- Dee Hock