Daily Tech Digest - August 13, 2024

The Tug of War Between Biometrics and Privacy

The strengths of biometric identification can combat fraud. Your fingerprint proves you are you before you conduct a transaction on your mobile banking app, for example. At airports, biometric identification is implemented as a matter of public safety. Fingerprint biometrics are standard in background checks. Within an enterprise, biometric systems may be used to prevent insider threats, verifying an employee’s identity before they conduct a transaction. Among the myriad use cases for biometrics, the argument for this technology is its convenience and its strengths over traditional measures, such as passwords. Biometric identifiers are unique to the individual and difficult to alter or fake. ... In many scenarios, consent is clear-cut. An enterprise has an upfront policy, and users must give their explicit permission to have their biometrics collected. Think of a banking app; you have to click through a series of prompts before you can start using your thumbprint to log into your account. In other situations, consent is not so easily addressed. In an airport, for example, it is possible to opt out of facial recognition, but many travelers may not realize they have that option.


Remember quantum computing in the cloud?

Quantum computing, while promising, is still mainly in the realm of future potential. The industry is making strides towards more advanced qubits and increased stability. However, the practical utility of these advancements remains over the horizon for many organizations. This timeline, coupled with the steep learning curve and investment required, has positioned quantum computing as a slower-evolving technology compared to AI. Moreover, the current quantum offerings, often accessed via cloud platforms, are still primarily experimental. They require specialized knowledge to leverage effectively, whereas GPUs integrated into cloud services can be readily used to scale existing AI operations with relatively lower barriers to entry. Why are generative AI and GPUs so dominant? The answer lies in immediate applicability and results. Businesses today face pressures to innovate faster than ever. Generative AI not only aids in creating innovative solutions but also provides a competitive edge in real-time decision-making processes. It is a tool ready to be wielded, with clear ROI and application pathways that quantum computing has yet to establish fully.


Welcome to the AI revolution: From horsepower to manpower to machine-power

Until very recently, technology was first and foremost a tool. It was something humans built and then used to do a job -- and to do it better, faster, and easier than we could without it. But still, we used technology. What's new with artificial intelligence (AI) is that we are not creating new tools to help us do a job. We are creating a new workforce to do the job for us. This trend is not absolute, of course, and we can always point to older technologies that did part of our job for us (factory automation began at least 200 years ago). However, we are now creating a cheaper, faster, better, scalable workforce, not a cheaper, faster, better, scalable toolset. This new workforce is not going to replace us all any time soon. There are two main reasons for this. The first is that the hype of AI far exceeds its current capabilities, except in some narrow, rules-based scenarios. Generative AI in particular appears almost magical in its ability to render text, images and even video. Yet its inability to understand any of its output, along with the volume of data and the power needed to train its models, surely prevents it from replacing human workers.


The Crucial Role of Firewall Rule Histories

In the security industry, there are unfortunately many opportunities for organizational learning and improvement after a breach or an attack, regardless of whether the attack succeeded or was stopped right away. Beyond the containment and security enhancement steps, firewall rule histories are also necessary to create a comprehensive post-mortem analysis of the breach’s scope and root cause. One of the greatest takeaways from a firewall rule analysis is the insight into a network segmentation weakness or access control mechanism that needs to be addressed to prevent similar attacks from being successful in the future. Understanding the lateral movement of attackers within the network helps in assessing the full extent of compromised systems or data. Rule histories can show security teams whether an attack was conducted quickly, as soon as an attacker gained access, or whether it was a slow, methodical process where adjustments were made over time to secure maximum impact when finally set into motion. Security teams can use firewall histories to identify recurring patterns, trends, or systemic vulnerabilities beyond those that lie on the surface.
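As a toy illustration of the fast-versus-slow tempo analysis described above (the record format, timestamps, and one-hour threshold here are hypothetical, not drawn from any particular firewall product):

```python
from datetime import datetime, timedelta

# Hypothetical rule-history records: (timestamp, rule_id, action)
history = [
    (datetime(2024, 8, 1, 2, 15), "R-101", "modified"),
    (datetime(2024, 8, 3, 2, 20), "R-101", "modified"),
    (datetime(2024, 8, 9, 2, 10), "R-101", "modified"),
]

def classify_tempo(events, window=timedelta(hours=1)):
    """Label a change history as a fast burst or a slow, staged campaign."""
    times = sorted(t for t, _, _ in events)
    span = times[-1] - times[0]
    return "burst" if span <= window else "slow-burn"
```

Here the same rule is touched three times over a week, so the history would be labeled a slow, methodical campaign rather than a smash-and-grab.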


CISOs face uncharted territory in preparing for AI security risks

Despite the enormous intellectual, technical, and government resources devoted to creating AI risk models, practical advice for CISOs on how to best manage AI risks is currently in short supply. Although CISOs and security teams have come to understand the supply chain risks of traditional software and code, particularly open-source software, managing AI risks is a whole new ballgame. “The difference is that AI and the use of AI models are new,” Alon Schindel, VP of data and threat research at Wiz, tells CSO. “We have never seen technology developed so fast like these models,” he says. “It’s not like the machine learning models of the past. There are some great opportunities here, but the work is not done yet. We still haven’t worked out how to ensure this feature will be the most effective for security teams.” James Robinson, CISO at Netskope, tells CSO, “It’s still very early days. It’s rapidly developing. The research reports are coming out amazingly fast, and there’s a lot of excitement and investment. The landscape continues to evolve. That’s one thing CISOs must be prepared for.” “Newer architecture and newer models are advancing by the second nowadays,” says Omar Santos.


Powering Industry 4.0 with the intelligent Edge

Successful Edge deployments drive businesses to treat the Edge as an integral part of their business strategy. Meeting the data demands of the latest AI-powered innovations isn't a one-person job. What’s clear is that AI is driving the demand for Edge technologies. To meet this demand, organizations will need to collaborate internally between IT and business teams, and externally with managed service providers (MSPs) who can help navigate legacy systems and protocols. Leveraging the knowledge of MSPs will be integral to finding the most efficient and effective ways for an enterprise to deploy and leverage Edge computing. By embracing the intelligent Edge, businesses can unlock a myriad of benefits from operational efficiency to real-time Actionable AI – the perfect foundation for agile and adaptable operations. As more enterprises look to adopt the latest Edge technologies, this foundation will be critical to ensuring seamless data processing, scalability, and the ability to adapt to evolving business goals, but this demands orchestration across IT and business functions. Keep in mind that going it alone on the journey can prevent enterprises from realizing the full potential of the Edge.


The Changing C-Suite: Chief AI Officer In, Chief Diversity Officer Out

Foss explained that the shift toward integrating diversity responsibilities into broader leadership roles is partly due to the increasing expectation to do more with less. "As organizations understand that having diverse teams leads to better outcomes and faster value creation, there's a growing consensus that all leaders should be involved in driving these initiatives," Foss said. From the perspective of Caroline Carruthers, CEO of global data consultancy Carruthers and Jackson, the roles that achieve longevity in the C-suite are those that are based around a corporate asset. "That could be anything from finance to people to data to operations to security," she said. ... Subramanian predicted that either the role of the chief diversity officer will evolve to encompass AI or a new role of chief AI officer will have broader oversight across AI and data. "It is likely that chief AI officers will develop close collaboration with security, IT, legal, and line of business leaders," Subramanian said. She added that she believes the roles of chief diversity officer and chief AI officer will merge, as AI needs data and the biggest opportunities with AI have to do with data.


From data to insight to action: The very human challenges of AI transformation

The first step in AI transformation is collecting data, which today is the easiest step. So far, Grantcharov has placed the platform in around 20 operating rooms across the U.S. Through a variety of sensors, the OR black box captured up to 1 million data points per day per site. These included audio-visual data of surgical procedures, electronic health records and input from surgical devices. The data also included biometric readings from the surgical team, such as their heart rate variability as a reflection of stress levels, and brain activity measured by wireless EEGs. ... But here’s where it’s also important to understand humans. AI can correlate OR accidents with certain events, but without a working hypothesis, it’s all just noise. For example, Grantcharov’s team hypothesized that stress could affect a surgeon’s performance by impacting their cognitive processing and decision making. So they designed the experiment to collect physiological data from the surgeons, and AI was able to correlate these data with OR accidents. The finding: Stressed-out surgeons had a 66% higher chance of making an error. ... Finally, systems are procedures or principles put into place that make the desired behavior the easiest to do.
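The 66% figure is a relative comparison of error rates between stressed and unstressed procedures. A toy version of that calculation, using made-up records rather than the study's data, might look like:

```python
# Hypothetical per-procedure records: (surgeon_was_stressed, error_occurred)
records = [
    (True, True), (True, False), (True, True), (True, False),
    (False, True), (False, False), (False, False), (False, False),
]

def relative_risk(records):
    """Error rate under stress divided by error rate without stress."""
    stressed = [error for stressed, error in records if stressed]
    calm = [error for stressed, error in records if not stressed]
    return (sum(stressed) / len(stressed)) / (sum(calm) / len(calm))
```

With this fabricated sample, stressed procedures err at twice the calm rate (a relative risk of 2.0, i.e. a 100% higher chance); the study's finding corresponds to a relative risk of about 1.66.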


UN Approves Cybercrime Treaty Despite Major Tech, Privacy Concerns

The treaty, passed on Aug. 8, will require a wide variety of companies — financial services, travel, technology, and telecommunications firms — not only to support domestic law enforcement, but to help with requests from treaty signatories, says Nick Ashton-Hart, head of the Cybersecurity Tech Accord delegation to the negotiations. "Unfortunately the draft adopted doesn't resolve any of the issues we raised, or that any other part of the private sector or civil society raised," he says. "Security researchers and penetration testers — as well as investigative journalists, whistleblowers, and others — are at risk of criminal prosecution because of the poor and vague wording in the criminalization chapter." ... "Because the convention allows all cooperation to take place in perpetual secrecy and has no oversight mechanism, the convention invites abusive requests for cooperation that can be used to undermine secure systems relied upon by billions of people and millions of enterprises each day," he says. "Without [cooperation] from the US and EU, there's little value in anyone else joining this..."


What Is Data Trust and Why Does It Matter?

Understanding the importance of data trust is the first step in implementing a program to build trust between the producers and consumers of the data products your company relies on increasingly for its success. Once you know the benefits and risks of making data trustworthy, the hard work of determining the best way to realize, measure, and maintain data trust begins. Among the goals of a data trust program are promoting the company’s privacy, security, and ethics policies, including consent management and assessing the risks of sharing data with third parties. The most crucial aspect of a data trust program is convincing knowledge workers that they can trust AI-based tools. A study released recently by Salesforce found that more than half of the global knowledge workers it surveyed don’t trust the data that’s used to train AI systems, and 56% find it difficult to extract the information they need from AI systems. Of the workers who don’t trust AI training data, three out of four state that the systems don’t have the information they need to be of use.



Quote for the day:

“Don’t let the fear of losing be greater than the excitement of winning.” -- Robert Kiyosaki

Daily Tech Digest - August 12, 2024

In three or four years, ‘we won’t even talk about AI’

In general, there’s a very positive view of AI in tech. In a lot of other industries, there’s some uncertainty, some trepidation, some curiosity. But part of our pulse survey said about three out of four tech workers are using AI on a daily basis. So, the adoption in this portfolio of companies is higher than most, and I’d also say most employers and workers have a very good idea that AI is going to improve their business and their work. ... “I view AI skills as adjacent, additive skills for most people — aside from really hardcore data scientists and AI engineers. This is how most people will work in the new world. Generally, it depends. Some organizations have built whole, distinct AI organizations. Others have built embedded AI domains in all of their job functions. It really depends. There’s a lot of discussion around whether companies should have a chief AI officer. I’m not sure that’s necessary. I think a lot of those functions are already in place. You do need someone in your organization who has a holistic view of the positive sides of this and the risks associated with this.”


The AI Balancing Act: Innovating While Safeguarding Consumer Privacy

There are two sides to every coin. While AI can further compliance efforts, it can also create new privacy and security challenges. This is particularly true today, amid an ongoing global effort to strengthen data privacy laws. Some 71% of countries have data privacy legislation, and in recent years, this has evolved to encapsulate AI. In the EU, for instance, the European Parliament has approved a dedicated AI regulatory framework. This framework imposes specific obligations on providers of high-risk AI systems and could ban certain AI-powered applications. The fact is, AI-powered technology is immensely powerful. But, it comes with complex challenges to data privacy compliance. A primary concern here relates to purpose limitation, specifically the disclosure provided to consumers regarding the purpose(s) for data processing and the consent obtained. As AI systems evolve, they may find new ways to utilise data, potentially extending beyond the scope of the original disclosure and consent agreement. As such, maintaining transparency in AI operations to ensure accurate and appropriate data use disclosures is critical.


Is biometric authentication still effective?

With the rapid advancement and accessibility of technologies, the efficacy and security of biometric authentication methods are under threat. Fraudsters are using spoofing techniques to replicate or falsify biometric data, such as creating synthetic fingerprints or 3D facial models, to fool sensors, mimic legitimate biometric traits and gain unauthorized access to secured services. ... Unlike traditional biometric authentication, which relies on static physical attributes, behavioral biometrics verify user identity based on unique interaction patterns, such as typing rhythm, mouse movements and touchscreen interactions. This shift is essential because behavioral biometrics offer a more dynamic and adaptive layer of security, making it significantly harder for fraudsters to replicate or mask. ... With data scattered across different systems, it’s challenging to correlate information, connect the dots and identify overarching patterns of bad behavior. A decentralized approach causes businesses to overlook crucial fraud indicators and struggle to respond effectively to emerging threats due to the lack of visibility and coordination among disparate fraud prevention tools.
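One common behavioral signal is typing rhythm. A deliberately crude sketch of the idea (the enrolled intervals, tolerance, and matching rule are all illustrative assumptions, far simpler than any production system):

```python
import statistics

# Hypothetical inter-keystroke intervals (seconds) from a user's enrolled profile
enrolled = [0.12, 0.15, 0.11, 0.14, 0.13]

def matches_profile(sample, profile, tolerance=0.05):
    """Crude check: does the session's mean typing rhythm fall near the profile's?"""
    return abs(statistics.mean(sample) - statistics.mean(profile)) <= tolerance
```

A session typed at roughly the enrolled cadence passes, while a markedly different rhythm is flagged; real systems combine many such features (dwell time, mouse dynamics, touch pressure) rather than a single mean.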


Practical strategies for mitigating API security risks

Identity and access management is crucial for a complete API security strategy. IAM facilitates efficient user management from creation to deactivation and ensures that only authorized individuals access APIs. IAM enables granular access control, granting permissions based on specific attributes and resources rather than just predefined roles. Integration with security information and event management (SIEM) systems enhances security by providing centralized visibility and enabling better threat detection and response. AI and machine learning are revolutionizing API security by providing sophisticated tools that enhance design, testing, threat detection, and overall governance. These technologies improve the robustness and resilience of APIs, enabling organizations to stay ahead of emerging threats and regulatory changes. As AI evolves, its role in API security will become increasingly vital, offering innovative solutions to the complex challenges of safeguarding digital assets. AI in API security goes beyond the limitations of human or rule-based interventions, enabling advanced pattern recognition and automating security audits and governance for greater defense against evolving threats.
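Granular, attribute-based access control of the kind described can be sketched as a policy check on attributes rather than a role lookup (the attribute names and policy shape below are illustrative assumptions, not any specific IAM product's API):

```python
# Minimal ABAC sketch: permissions keyed on attributes, not just predefined roles.
def authorized(user, action, resource):
    """Grant access only when the user's attributes satisfy the resource's policy."""
    policy = resource.get("policy", {})
    return (user.get("department") == policy.get("department")
            and action in policy.get("allowed_actions", ()))

user = {"id": "u1", "department": "billing"}
invoice_api = {"policy": {"department": "billing", "allowed_actions": ("read",)}}
```

In practice the same decision point would also emit an event to the SIEM, so every allow/deny is centrally visible for threat detection.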


The evolution of the CTO – from tech keeper to strategic leader

CTOs have experienced a huge shift in how they are positioned in the workplace. They are no longer part of a small-to-medium-sized team that operates separately from the rest of the business; they are the key to tangible business growth and perhaps one of the most crucial parts of a leadership team. The main duty of CTOs is to maintain – and where possible, to modernise – tech, and to decide when something has kicked the bucket and no longer has a purpose. These things require people power, specialist skills and money. Needless to say, the investment in the role is vital. Tech leaders often feel burnt out, or worried that they don’t have the resources and support needed to do their job well. ... The saying goes, “You can never set foot in the same river twice,” and the same is true for leaders in tech – everything evolves from the moment you start working on a project. There is much to appreciate about technology that remains stable and adaptable when changes are necessary during development. Today, innovative CTOs are on the lookout for software solutions that come with the flexibility to make an important U-turn if ever needed.


How AIOps Is Transforming IT Operations Management

IT operations management has become increasingly challenging as networks have become larger and more complex, with the introduction of remote workers and the distribution of applications and workloads across networks. Traditional operations management tools and practices struggle to keep up with the ever-growing volumes of data from multiple sources within complex and varied network environments. AIOps was designed to bring the speed, accuracy and predictive capabilities of AI technology to IT operations. AIOps provides contextually enriched, deep end-to-end, real-time insights that can be proactively acted upon, according to Forrester. AIOps solutions use real-time telemetry, developing patterns and historical operational data to perform real-time assessments of what is happening, whether it has happened before or not, what paths it might take, and what negative effects it might have on business operations. ... A "digitally mature" organization has a much better ROI on the AI investment. But because this is a "rolling target" and not static, an organization's IT infrastructure "must be able to adapt and change," Ramamoorthy said.
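The real-time assessment AIOps tools perform often starts from statistical baselining of telemetry: is the latest reading consistent with recent history, whether or not this exact situation has happened before? A minimal z-score sketch (hypothetical latency figures, and a stand-in for far richer models):

```python
import statistics

def is_anomalous(history, latest, threshold=3.0):
    """Flag a telemetry reading that deviates strongly from recent history."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(latest - mean) > threshold * stdev

# Hypothetical recent response-time telemetry in milliseconds
latencies_ms = [102, 98, 101, 99, 100, 103, 97, 100]
```

A 250 ms reading against this baseline would be flagged for proactive action, while 101 ms would not; production AIOps systems layer pattern learning and historical context on top of this kind of test.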


The cyber assault on healthcare: What the Change Healthcare breach reveals

Many security leaders report that they don’t have adequate resources to implement the needed security measures because they’re often competing with pricey life-saving medical equipment for the limited funds available to spend, Kim says. Furthermore, he says their complex technology environments can make applying and creating security in depth not only more challenging but more costly, too. That, in turn, makes it less likely for CISOs to get the resources they need. Security teams in healthcare also have more challenges in updating and patching systems, Riggi explains, as the sector’s need for 24/7 availability means organizations can’t easily go offline — if they can go offline at all — to perform needed work. Healthcare security leaders also have a rapidly expanding tech environment to secure, as both more partners and more patients with remote medical devices become part of the sector’s already highly interconnected environment, says Errol S. Weiss, chief security officer at Health-ISAC. Such expansion heightens the challenges, complexities and costs of implementing security controls as well as heightening the risks that a successful attack against one point in that web would impact many others.


Solar Power Installations Worldwide Open to Cloud API Bugs

"The issue we discovered lies in the cloud APIs that connect the hardware with the user," both on Solarman's platform and on Deye Cloud, says Bogdan Botezatu, director of threat research and reporting at Bitdefender. "These APIs have vulnerable endpoints that allow an unauthorized third party to change settings or otherwise control the inverters and data loggers via the vulnerable Solarman and Deye platforms," he says. Bitdefender, for instance, found that the Solarman platform's /oauth2-s/oauth/token API endpoint would let an attacker generate authorization tokens for any regular or business accounts on the platform. "This means that a malicious user could iterate through all accounts, take over any of them and modify inverter parameters or change how the inverter interacts with the grid," Bitdefender said in its report. The security vendor also found Solarman's API endpoints to be exposing an excessive amount of information — including personally identifiable information — about organizations and individuals on the platform.
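The missing control in the token-endpoint flaw is an ownership check before minting a token. A defensive sketch of that check (the function, account store, and token format are purely illustrative, not Solarman's or Deye's actual code):

```python
# Only issue a token when the authenticated caller owns the requested account;
# the reported flaw let callers obtain tokens for arbitrary accounts.
def issue_token(authenticated_user, requested_account, accounts):
    if requested_account not in accounts:
        raise KeyError("unknown account")
    if accounts[requested_account]["owner"] != authenticated_user:
        raise PermissionError("caller does not own the requested account")
    return {"account": requested_account,
            "token": "opaque-token-for-" + requested_account}

accounts = {"acct-1": {"owner": "alice"}, "acct-2": {"owner": "bob"}}
```

Without the ownership test, iterating `requested_account` over all known IDs yields a valid token for each, which is exactly the account-takeover path Bitdefender describes.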


6 hard truths of generative AI in the enterprise

“Not a week goes by without another new tool that is mind-blowing in its abilities and potential future impact,’’ agrees David Higginson, chief innovation officer and executive vice president of Phoenix Children’s Hospital. But right now genAI “can really only be executed by a small number of technology giants rather than being tinkered with at a local skunkworks level within a healthcare organization,’’ he says. “Therefore, it feels as if we are in a bit of a paused state waiting for established vendors to deliver mature solutions that can provide the tangible value we all anticipated.” ... The fundamental barriers to adopting genAI are the scarcity and cost of the hardware, power, and data needed to train models, Higginson says. “With such scarcity comes the need to prioritize which solutions have the broadest appeal to the population and can generate the most long-term revenue,’’ he says. ... While research and development continue to push the needle on what genAI can do, “we know that data is a critical aspect to enabling AI solutions and we also recognize that many organizations are uncovering the work it will take to build the right data foundations to support scaled AI deployments,” says Deloitte’s Rowan.


Investing in Capacity to Adapt to Surprises in Software-Reliant Businesses

A well-known and contrarian adage in the Resilience Engineering community is that Murphy's Law - "anything that can go wrong, will" - is wrong. What can go wrong almost never does, but we don't tend to notice that. People engaged in modern work (not just software engineers) are continually adapting what they’re doing, according to the context they find themselves in. They’re able to avoid problems in most everything they do, almost all of the time. When things do go "sideways" and an issue crops up they need to handle or rectify, they are able to adapt to these situations due to the expertise they have. Research in decision-making described in the article Seeing the invisible: Perceptual-cognitive aspects of expertise by Klein, G. A., & Hoffman, R. R. (2020) reveals that while demonstrations of expertise play out in time-pressured and high-consequence events (like incident response), expertise comes from experience with facing varying situations involved with "ordinary" everyday work. It is "hidden" because the speed and ease with which experts do ordinary work contrasts with how sophisticated the work is. 



Quote for the day:

"True leadership must be for the benefit of the followers, not the enrichment of the leaders." -- Robert Townsend

Daily Tech Digest - August 11, 2024

Three Tips For Tackling Software Complexity And Technical Debt With Architectural Observability

Software teams and engineering leaders face the critical challenge of managing complex architectures, preventing architectural drift and addressing technical debt effectively. Without a clear understanding of their application’s architecture and the ability to observe changes over time, teams risk increased complexity, reduced agility and potential market irrelevance. ... By identifying the root cause of architectural complexity and improving application modularity, teams can move faster to create more resilient, scalable and maintainable applications. Continuously observing software architecture offers a real-time understanding of how it evolves from release to release to make better decisions about the right architectural choices for their business. ... The fast pace of release cycles has resulted in architects and engineers being overburdened and unsure where to begin in untangling complex architectures. With architectural observability, teams get a clearer sense of where to start. They can prioritize ATD remediation based on their most significant pain points. By prioritizing tasks according to pain point importance, teams ensure they solve the most urgent problems first.


Managing Technology Debt: Practical Tips to Improve Your Codebase

Identifying and prioritizing areas needing attention is the first step in managing technical debt. Regular code reviews are a practical approach to identifying and addressing unintentional technology debt before it escalates. Factors to consider when prioritizing technical debt include its ability to impede development cycles, functionality, and user experience. Creating greater transparency around technical debt can be achieved by tracking and communicating it regularly. Practices that can help assess technical debt include involving stakeholders, conducting regular code reviews, and having discussions about debt metaphors. ... If the tech debt is too extensive, it may make more sense to migrate away by building or acquiring new technology. We’ve employed this strategy in situations where the existing codebase was too brittle to justify extensive refactoring. An underlying platform to sync security and data between new and old solutions is essential for this strategy to work. There is often a high upfront cost for this strategy, but it can be a powerful way to avoid significant refactoring and loss of revenue from a brittle yet operational product. 


Aligning Cultural and Technical Maturity in Data Science

While some organizations boast high technical maturity with sophisticated data science teams, they may struggle with adoption across their organization. Conversely, others may have a strong cultural inclination towards data-driven decision-making but lack the technical infrastructure to support it. For organizations that are culturally ready to integrate data science into their business but are technically nascent -- referred to as “aspiring” -- there are practical steps to build a robust data science presence. The key is to start small, focusing on foundational skills and gradually tackling more complex problems as the team matures. ... One effective strategy for embedding data science teams within the business is to ensure you prioritize a solid methodological foundation. You can then bring those methodologies to life with the use of technical packages. These are blocks of code or algorithms that can be reused across the organization. They ensure consistency in methodology and save time by preventing data scientists from reinventing the wheel. 
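A "technical package" in this sense can be as small as a single vetted function that every team imports rather than rewrites. For example (an illustrative sketch, not code from the article):

```python
import statistics

def standard_score(values):
    """Org-standard feature normalization: one vetted implementation, reused
    everywhere so every team scales data the same way."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [(v - mean) / stdev for v in values]
```

Publishing even small steps like this as a shared package keeps methodology consistent across teams and spares each data scientist from reinventing the wheel.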


AI could be the breakthrough that allows humanoid robots to jump from science fiction to reality

The potential applications of humanoid robots are vast and varied. Early modern research in humanoid robotics focused on developing robots to operate in extreme environments that are dangerous and difficult for human operators to access. These include Nasa’s Valkyrie robot, designed for space exploration. However, we will probably first see commercial humanoid robots deployed in controlled environments such as manufacturing. Robots such as Tesla’s Optimus could revolutionise manufacturing and logistics by performing tasks that require precision and endurance. They could work alongside human employees, enhancing productivity and safety. ... While the technological potential of humanoid robots is undeniable, the market viability of such products remains uncertain. Several factors will influence their acceptance and success, including cost, reliability, and public perception. Historically, the adoption of new technologies often faces hurdles related to consumer trust and affordability. For Tesla’s Optimus to succeed commercially, it must not only prove its technical capabilities but also demonstrate tangible benefits that outweigh its costs.


Harness software intelligence to conquer complexity and drive innovation

In addition to the technical challenges, the high cognitive load associated with working on a complex application can profoundly impact your team’s morale and job satisfaction. When developers feel overwhelmed, lack control over their work, and are constantly firefighting issues, they experience a sense of chaos and diminished agency. This lack of agency can lead to increased levels of stress and burnout. The ultimate result is higher attrition rates, as team members seek out opportunities where they feel more in control of their work and can make a more meaningful impact. The consequences of high attrition rates in your development team can be far-reaching. Not only does it disrupt the continuity of your projects and slow down progress, but it also results in a loss of valuable institutional knowledge. When experienced developers leave the company, they take with them a deep understanding of the application’s history, quirks, and best practices. This knowledge gap can be difficult to bridge as new team members struggle to get up to speed and navigate the complex codebase, often taking months to become productive. 


Five critical questions to help you increase business resilience

Take time to explore with your technology and engineering leaders how much visibility they have into risks. What tools do they use? Are there any specific roles charged with monitoring or interpreting system data? Does the team have the right capabilities? Do they have the time to pay attention to existing system performance? ... Every organization has its own culture and processes. That means the way problems are addressed and incidents responded to will likely be unique — for better and worse. However, it’s essential that business leaders get to know these processes. Do your technology teams have the resources needed to respond quickly? Are organizational structures helping them move as they need to or hindering them? What metrics are in place for measuring incident response times — and how do we measure up at the moment? ... In short, talk to your technology leaders about how they’re working to achieve software and delivery excellence — are we following best practices? Are we making informed decisions about tools? Are we bringing security decisions to bear on software early in the development process? Again, trust and honesty are important here. No one wants to talk about their limitations and what they’re not currently doing. 


Copyright Office Calls for Federal Law to Combat Unauthorized Deepfakes

A spate of legislation is in progress to address unauthorized deepfakes, but these laws are fragmented, focusing on specific applications. For instance, the Deepfakes Accountability Act aims to safeguard national security from deepfakes, while Tennessee’s ELVIS Act safeguards the vocal rights of musicians. “The impact is not limited to a select group of individuals, a particular industry, or a geographic location,” the Copyright Office said in its report, urging the need for comprehensive legislation. The office contended that current legal remedies for those harmed by unauthorized digital replicas are insufficient and that existing federal laws are “too narrowly drawn to fully address the harm from today’s sophisticated digital replicas.” Among the recommendations for federal legislation on deepfakes, the Copyright Office suggested protecting all individuals, not just celebrities, from unauthorized digital replicas. The proposed law would establish a federal right that protects all individuals during their lifetimes from the knowing distribution of unauthorized digital replicas.


From Accidental to Intentional: Your Roadmap to Architectural Excellence

One place to start is by identifying the primary purpose of IT in the organization. We’ve experienced all sorts of responses when we propose this as a starting point, from quizzical looks to downright shock. Yet, when organizations really examine their own internal beliefs, there is a wide discrepancy in the view of purpose. ... A common discussion with our clients includes a session to understand the pain points that they experience. Importantly, we work to learn who experiences the pain. We find it common for decision makers to feel disproportionately less pain under the current architectural state. Understanding why decision-makers feel less pain is a critical part of these discussions. Your technical team likely faces challenges meeting deadlines and budgets beyond their control, often accumulating technical debt. Technical debt is often the result of working around architectural deficiencies to meet these deadlines and remain within budget. ... To build a culture of improvement, start by providing the space and resources your team needs to tackle these challenges head-on. 


LLM progress is slowing — what will it mean for AI?

To see the trend, consider OpenAI’s releases. The leap from GPT-3 to GPT-3.5 was huge, propelling OpenAI into the public consciousness. The jump up to GPT-4 was also impressive, a giant step forward in power and capacity. Then came GPT-4 Turbo, which added some speed, then GPT-4 Vision, which really just unlocked GPT-4’s existing image recognition capabilities. And just a few weeks back, we saw the release of GPT-4o, which offered enhanced multi-modality but relatively little in terms of additional power. ... Because as the LLMs go, so goes the broader world of AI. Each substantial improvement in LLM power has made a big difference to what teams can build and, even more critically, get to work reliably. Think about chatbot effectiveness. With the original GPT-3, responses to user prompts could be hit-or-miss. Then we had GPT-3.5, which made it much easier to build a convincing chatbot and offered better, but still uneven, responses. It wasn’t until GPT-4 that we saw consistently on-target outputs from an LLM that actually followed directions and showed some level of reasoning. We expect to see GPT-5 soon, but OpenAI seems to be managing expectations carefully. 


Empowering Efficient DevOps with AI + Automation

Today’s DevOps practitioners must contend with technological challenges that were unimaginable when the term was first coined during the inaugural DevOpsDays conference in 2009. Since then, technology and data have scaled at a record-breaking rate, with the total amount of data created globally projected to nearly triple between 2020 and 2025. The management of this explosion of data in turn requires DevOps teams to navigate multiple clouds, networks, emerging technologies and more to conduct day-to-day operations. These disparate environments also increase complexity, limit observability, and keep information siloed, creating several challenges. ... Fortunately, DevOps teams are learning that a more intelligent and automated approach to IT management can help overcome the above challenges and unlock more efficiency, quality and value for the organization. By establishing a more agile and AI-enabled approach to IT operations management, DevOps practitioners can not only cope and keep pace with the modern landscape but thrive and drive innovation amid these challenges. While there is no single blueprint, organizations should focus on a holistic approach to streamlining and automating IT operations in modern hybrid cloud environments. 



Quote for the day:

"Nothing in the world is more common than unsuccessful people with talent." -- Anonymous

Daily Tech Digest - August 10, 2024

What to Look for in a Network Detection and Response (NDR) Product

NDR's practical limitation lies in its focus on the network layer, Orr says. Enterprises that have invested in NDR also need to address detection and response for multiple security layers, ranging from cloud workloads to endpoints and from servers to networks. "This integrated approach to cybersecurity is commonly referred to as Extended Detection and Response (XDR), or Managed Detection and Response (MDR) when provided by a managed service provider," he explains. Features such as Intrusion Prevention Systems (IPS), which are typically included with firewalls, are not as critical because they are already delivered via other vendors, Tadmor says. "Similarly, Endpoint Detection and Response (EDR) is being merged into the broader XDR (Extended Detection and Response) market, which includes EDR, NDR, and Identity Threat Detection and Response (ITDR), reducing the standalone importance of EDR in NDR solutions." ... Look for vendors that are focused on fast, accurate detection and response, advises Reade Taylor, an ex-IBM Internet security systems engineer, now the technology leader of managed services provider Cyber Command. 


AI In Business: Elevating CX & Energising Employees

Using AI in CX certainly eases business operations, but it’s ultimately a win for the customer too. As AI collects, analyses, and learns from large volumes of data, it delivers new worlds of actionable insights that empower businesses to get personal with their customer journeys. In recent years, businesses have tried their best to personalise the customer experience – but working with a handful of generic personas only gets you so far. Today’s AI, however, has the power to unlock next-level insights that help businesses discover customers’ expectations, wants, and needs so they can create individualised experiences on a one-to-one level. ... In human resources, AI further presents opportunities to help employees. For example, AI can elevate standard on-the-job training by creating personalised learning and development programmes for employees. Meanwhile, AI can also help job hunters find opportunities they may have overlooked. For example, far too many jobseekers have valuable and transferable skills but lack the experience in the right business vertical to land a job. According to NIESR, 63% of UK graduates are mismatched in this way. 


The benefits and pitfalls of platform engineering

The first step of platform engineering is to reduce tool sprawl by making clear what tools should make up the internal developer platform. The next step is to reduce context-switching between these tools, which can result in significant time loss. By using a portal as a hub, users can find all of the information they need in one place without constantly switching tabs. This improves the developer experience and enhances productivity. ... In terms of scale, platform engineering can help an organization better understand and manage their services, workloads, traffic and APIs. This can come through auto-scaling rules, load balancing traffic, using TTL in self-service actions, and an API catalog. ... Often, as more platform tools are added and more microservices are introduced, things become difficult to track, and this leads to an increase in deploy failures, longer feature development/discovery times, and general fatigue and developer dissatisfaction because of the unpredictability of bouncing around different platform tools to perform their work. There needs to be a way to track what’s happening throughout the SDLC. ... Adoption is another challenge: how (and whether it is even possible) to get developers to change the way they work.


The irreversible footprint: Biometric data and the urgent need for right to be forgotten

The absence of clear definitions and categorisations of biometric data within current legislation highlights the need for comprehensive frameworks that specifically define rules governing its collection, storage, processing and deletion. Established legislation like the Information Technology Act, which was supplemented by subsequent ‘Rules’ for various digital governance aspects, can be used as a precedent. For instance, the 2021 Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules were introduced to establish a robust complaint mechanism for social media and OTT platforms, addressing inadequacies in the Parent Act. To close the current regulatory loopholes, a separate set of rules governing biometric data under the Digital Personal Data Protection Act, 2023 should be considered. ... The ‘right to be forgotten’ must be a basic element of it, recognising people's sovereignty over their biometric data. Such focused regulations would not just bolster the safeguarding of biometric information, but also ensure compliance and accountability among entities handling sensitive data. Ultimately, this approach aims to cultivate a more resilient and privacy-conscious ecosystem within our dynamic digital landscape.


6 IT risk assessment frameworks compared

ISACA says implementation of COBIT is flexible, enabling organizations to customize their governance strategy via the framework. “COBIT, through its insatiable focus on governance and management of enterprise IT, aligns the IT infrastructure to business goals and maintains strategic advantage,” says Lucas Botzen, CEO at Rivermate, a provider of remote workforce and payroll services. “For governance and management of corporate IT, COBIT is a must,” says ... FAIR’s quantitative cyber risk assessment is applicable across sectors, and now emphasizes supply chain risk management and securing technologies such as internet of things (IoT) and artificial intelligence (AI), Shaw University’s Lewis says. Because it uses a quantitative risk management method, FAIR helps organizations determine how risks will affect their finances, Fuel Logic’s Vancil says. “This method lets you choose where to put your security money and how to balance risk and return best.” ... Conformity with ISO/IEC 27001 means an organization has put in place a system to manage risks related to the security of data owned or handled by the organization. The standard, “gives you a structured way to handle private company data and keep it safe,” Vancil says. 
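FAIR's quantitative approach can be illustrated in miniature: annualized loss exposure is event frequency times loss magnitude, which puts risks in financial terms so they can be ranked. Here is a minimal sketch; the scenario names and figures are invented for illustration and real FAIR analyses use ranges and Monte Carlo simulation rather than point estimates.

```python
# Simplest form of FAIR-style quantification: annualized loss exposure
# equals loss event frequency times loss magnitude. All figures invented.

def annualized_loss_exposure(events_per_year, loss_per_event):
    """Expected annual loss, in currency units, for one risk scenario."""
    return events_per_year * loss_per_event

scenarios = {
    "phishing-led breach": annualized_loss_exposure(2.0, 150_000),
    "ransomware outage": annualized_loss_exposure(0.1, 2_000_000),
}

# Rank scenarios so security spend goes where expected losses are largest
ranked = sorted(scenarios, key=scenarios.get, reverse=True)
```

The ranking is what lets you "choose where to put your security money": the frequent, moderate-loss scenario here outweighs the rare, catastrophic one.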


Why is server cooling so important in the data center industry?

AI and other HPC sectors are continuing to drive up the power density of rack-mount server systems. This increased compute means increased power draw, which leads to increased heat generation. Removing that heat from the server systems in turn requires more power for high CFM (cubic feet per minute) fans. Liquid cooling technologies, including rack-level-cooling and immersion, can improve the efficiency of the heat removal from server systems, requiring less powerful fans. In turn, this can reduce the overall power budget of a rack of servers. When extrapolating this out across large sections of a data center footprint, the savings can add up significantly. When you consider some of the latest Nvidia rack offerings require 40KW or more, you can start to see how the power requirements are shifting to the extreme. For reference, it’s not uncommon for a lot of electronic trading co-locations to only offer 6-12KW racks, which are sometimes operated half-empty due to the servers requiring more power draw than the rack can provide. These trends are going to force data centers to adopt any technology that can reduce the power burden on not only their own infrastructure but also the local infrastructure that supplies them.
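The power arithmetic behind "the savings can add up significantly" is easy to sketch. Suppose, purely as an assumption for illustration, that fans consume roughly 10% of rack power under air cooling and liquid cooling removes 80% of that fan load; neither figure comes from the article or any vendor.

```python
def fan_power_savings_kw(rack_kw, racks, fan_fraction=0.10, reduction=0.80):
    """Fan power (kW) saved by liquid cooling across a data-hall footprint.

    fan_fraction: assumed share of rack power driving high-CFM fans
    reduction:    assumed fraction of that fan power liquid cooling removes
    """
    return rack_kw * racks * fan_fraction * reduction

# 100 racks at 40 kW each, the density of the latest dense AI racks:
saving_kw = fan_power_savings_kw(40, 100)
```

Under those assumptions, a 100-rack hall frees up hundreds of kilowatts of fan power, which is budget that can go back into compute or off the utility bill.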


Cutting the High Cost of Testing Microservices

Given the high costs associated with environment duplication, it is worth considering alternative strategies. One approach is to use dynamic environment provisioning, where environments are created on demand and torn down when no longer needed. This method can help optimize resource utilization and reduce costs by avoiding the need for permanently duplicated setups. This can keep costs down but still comes with the trade-off of sending some testing to staging anyway. That’s because there are shortcuts that we must take to spin up these dynamic environments, like using mocks for third-party services. This may put us back at square one in terms of testing reliability, that is, how well our tests reflect what will happen in production. At this point, it’s reasonable to consider alternative methods that use technical fixes to make staging and other near-to-production environments easier to test on. ... While duplicating environments might seem like a practical solution for ensuring consistency in microservices, the infrastructure costs involved can be significant. By exploring alternative strategies such as dynamic provisioning and request isolation, organizations can better manage their resources and mitigate the financial impact of maintaining multiple environments.


The Cybersecurity Workforce Has an Immigration Problem

Creating a skilled immigration pathway for cybersecurity will require new policies. Chief among them is a mechanism to verify that applicants have relevant cybersecurity skills. One approach is allowing people to identify themselves by bringing forth previously unidentified bugs. This strategy is a natural way to prove aptitude and has the additional benefit of requiring no formal expertise or expensive testing. However, it would also require safe harbor provisions to protect individuals from prosecution under the Computer Fraud and Abuse Act. ... The West’s adversaries may also play a counterintuitive role in a cybersecurity workforce solution. Recent work from Eugenio Benincasa at ETH Zurich highlights the strength of China’s cybersecurity workforce. How many Chinese hackers might be tempted to immigrate to the West, if invited, for better pay and greater political freedom? While politically sensitive, a policy that allows foreign-trained cybersecurity experts to immigrate to the US could enhance the West’s workforce while depriving its adversaries of offensive talent. At the same time, such immigration programs must be measured and targeted to avoid adding tension to a world in which geopolitical conflict is already rising. 


Cross-Cloud: The Next Evolution in Cloud Computing?

The key difference between cross-cloud and multicloud is that cross-cloud spreads the same workload across clouds. In contrast, multicloud simply means using more than one public cloud at the same time — with one cloud hosting some workloads and other clouds hosting other workloads. ... That said, in other respects, cross-cloud and multicloud offer similar benefits — although cross-cloud allows organizations to double down on some of those benefits. For instance, a multicloud strategy can help reduce cloud costs by allowing you to pick and choose from among multiple clouds for different types of workloads, depending on which cloud offers the best pricing for different types of services. One cloud might offer more cost-effective virtual servers, for example, while another has cheaper object storage. As a result, you use one cloud to host VM-based workloads and another to store data. You can do something similar with cross-cloud, but in a more granular way. Instead of having to devote an entire workload to one cloud or another depending on which cloud offers the best overall pricing for that type of workload, you can run some parts of the workload on one cloud and others on a different cloud. 
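The granularity argument can be made concrete with toy numbers. The cloud names and per-hour prices below are hypothetical; the point is that per-component placement can only ever match or beat placing the whole workload on the single cheapest cloud.

```python
# Hypothetical per-hour prices; real pricing varies by provider, region, SKU.
prices = {
    "cloud_a": {"vm": 0.10, "object_storage": 0.04},
    "cloud_b": {"vm": 0.12, "object_storage": 0.02},
}

def multicloud_cost(components):
    """Whole workload pinned to the single cheapest cloud overall."""
    return min(sum(p[c] for c in components) for p in prices.values())

def crosscloud_cost(components):
    """Each component placed on whichever cloud prices it lowest."""
    return sum(min(p[c] for p in prices.values()) for c in components)

workload = ["vm", "object_storage"]
# Per-component placement can only match or beat single-cloud placement
assert crosscloud_cost(workload) <= multicloud_cost(workload)
```

Here cloud_a wins on VMs and cloud_b on object storage, so splitting one workload across both is cheaper than pinning it to either.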


Will We Survive The Transitive Vulnerability Locusts

The issue today is that modern software development resembles constructing with Legos, where applications are built using numerous open-source dependencies — no one writes frameworks from scratch anymore. With each dependency comes the very real probability of inherited vulnerabilities. When unique applications are then built on top of those frameworks, it turns into a patchwork of potential vulnerability dependencies that are stitched together with our own proprietary code, without any mitigation of the existing vulnerabilities. ... With a proposed solution, it would be easy to conclude that we have fixed the problem. Given this vulnerability, we could just patch it and be secure, right? But after we updated the manifest file, and theoretically removed the transitive vulnerability, it still showed up in the SCA scan. After two tries at remediating the problem, we recognized that two variable versions were present. Using the SCA scan, we determined the root cause of the vulnerability had been imported and used. This is a fine manual fix, but reproducing this process manually at scale is near-impossible. We therefore decided to test whether we could group CVE behavior by their common weakness enumeration (CWE) classification. 
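The final step, grouping CVE behavior by CWE classification so remediation can be applied per weakness class rather than CVE by CVE, is straightforward to sketch. The CVE and CWE identifiers below are fabricated for illustration.

```python
from collections import defaultdict

# Toy SCA findings; IDs and classifications are made up for illustration.
findings = [
    {"cve": "CVE-2024-0001", "cwe": "CWE-79"},   # cross-site scripting
    {"cve": "CVE-2024-0002", "cwe": "CWE-89"},   # SQL injection
    {"cve": "CVE-2024-0003", "cwe": "CWE-79"},
]

def group_by_cwe(findings):
    """Bucket SCA findings by CWE so a common remediation pattern can be
    tested once per weakness class instead of per individual CVE."""
    groups = defaultdict(list)
    for f in findings:
        groups[f["cwe"]].append(f["cve"])
    return dict(groups)

groups = group_by_cwe(findings)
```

At scale, each bucket becomes a candidate for one automated fix strategy, which is exactly what the manual per-CVE process cannot provide.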



Quote for the day:

"You are the only one who can use your ability. It is an awesome responsibility." -- Zig Ziglar

Daily Tech Digest - August 09, 2024

High-Performance IT Strategy Drives Business Value From AI

CIOs and technology leaders have always aimed to ensure IT-business alignment, but achieving it proved challenging. Forrester's research indicated that firms with strong alignment grew 2.4 times faster than their peers and were twice as profitable. "Businesses often do not state their requirements clearly, and IT leaders struggle to understand them," Sharma said. ... Business is based on trust, and companies that people trust can earn more loyalty and advocacy. Because technology is central to customer experience, trust is vital in HPIT. Forrester's data showed that people who trusted a company were 1.8 times more likely to recommend that company to friends and peers. "We have found that the companies that can create mutual trust between business and IT - and business and their customers - tend to outperform their peers in the market," Sharma said. ... Organizations need to keep pace with the rapid technological changes. The swift evolution of technology necessitated quick adaptation and scaling to meet unique and common business needs. "Alignment is ongoing. You need to change your technology skills, practices and even the technology itself," Sharma said.


How to train an AI-enabled workforce — and why you need to

Building an AI team is an evolving process, just as genAI itself is steadily evolving — even week to week. “First, it’s crucial to understand what the organization wants to do with AI,” Corey Hynes, executive chair and founder at IT training company Skillable, said in an earlier interview with Computerworld. “Second, there must be an appetite for innovation and dedication to it, and a strategy — don’t embark on AI efforts without due investment and thought. Once you understand the purpose and goal, then you look for the right team,” Hynes added. ... Corporate AI initiatives, Alba said, are similar to the shift that took place when the internet or cloud computing took hold, and there was “a sudden upskilling” in the name of productivity. Major technology market shifts also affect how employees think about their careers. “Am I getting the right development opportunities from my employer? Am I being upskilled?” Alba said. “How upfront are we about leveraging some of these innovations? Am I using a private LLM at my employer? If not, am I using some of the public tools, i.e. OpenAI and ChatGPT? How much on the cutting edge am I getting and how much are we innovating?”


Immutability in Cybersecurity: A Layer of Security Amidst Complexity and Misconceptions

An immutable server provides an environmental defense for the data it contains. It generally uses a stripped down operating system and configuration that does not allow, or severely limits, third-party access. Under such circumstances, any attempted access and any unusual activity is potentially malicious. Once configured, the server’s state is fixed – the software, configuration files, and data on the server cannot be modified directly. If this somehow does happen, the data contained can be burned, a new server with the same system configuration can be stood up, and fresh data from backup could be uploaded. It means, in theory, the immutable server could always be secure and contain the latest data. ... Immutable backup is a copy of data that cannot be altered, changed, or deleted. It is fundamentally some form of write once, read many times technology. Anthony Cusimano, director of technical marketing at Object First, provides more detail. “Immutable backup storage is a type of data repository where information cannot be modified, deleted, or overwritten for a set period. Most immutable storage targets are object storage and use an ‘object lock’ mechanism to prevent unintentional or deliberate alterations or deletions.”


CrowdStrike's Legal Pressures Mount, Could Blaze Path to Liability

Currently, the bar is so high for bringing a successful case against a software maker that most attorneys are disincentivized to even try, says Fordham's Sharma. "How these cases go will give us a lot of insight into how high are these barriers, what needs to be reformed," she says. "We don't have a lot of case law on this ... so this will be very exemplary in shedding light on exactly what the contours of those barriers are." The software liability landscape is currently pretty craggy. While simple on its surface — "software makers must be held responsible for insecure software" — even the question of who is responsible can quickly become complex, as the interplay between Delta Airlines, CrowdStrike, and Microsoft shows. Software liability legislation and regulations would have to solve this issue and many others, the Atlantic Council's Cyber Statecraft Initiative stated in a 32-page analysis published earlier this year. "Software security is a problem of 'shared responsibility': users of software, in addition to its developers, have significant control over cybersecurity outcomes through their own security practices," the report stated. 


Meet Prompt Poet: The Google-acquired tool revolutionizing LLM prompt engineering

Prompt Poet is a groundbreaking tool developed by Character.ai, a platform and makerspace for personalized conversational AIs, which was recently acquired by Google. Prompt Poet potentially offers a look at the future direction of prompt context management across Google’s AI projects, such as Gemini. ... Customizing an LLM application, such as a chatbot, often involves giving it detailed instructions about how to behave. This might mean describing a certain personality type, situation, or role, or even emulating a specific historical or fictional person. ... Data can be loaded in manually, just by typing it into ChatGPT. If you ask for advice about how to install some software, you have to tell it about your hardware. If you ask for help crafting the perfect resume, you have to tell it your skills and work history first. However, while this is OK for personal use, it does not work for development. Even for personal use, manually inputting data for each interaction can be tedious and error-prone.
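The fix for that tedium is template-driven prompt assembly: keep the prompt's fixed structure separate from the per-user data and fill the data in programmatically. This sketch uses Python's stdlib `string.Template` rather than Prompt Poet's actual API, which the article does not show, so the template and field names are purely illustrative.

```python
from string import Template

# Fixed prompt structure, defined once by the developer
resume_prompt = Template(
    "You are a career coach.\n"
    "Skills: $skills\n"
    "Work history: $history\n"
    "Task: draft a resume summary."
)

def build_prompt(skills, history):
    """Fill user data into the fixed prompt structure, so the user never
    has to retype their context on every interaction."""
    return resume_prompt.substitute(skills=skills, history=history)

prompt = build_prompt("Python, SQL", "5 years as a data analyst")
```

The same pattern scales to pulling the data from a profile store or CRM instead of function arguments, which is where it stops being a personal convenience and becomes a development practice.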


The rise of the ‘machine defendant’ – who’s to blame when an AI makes mistakes?

One of the most obvious risks is that “bad actors” – such as organised crime groups and rogue nation states – use the technology to deliberately cause harm. This could include using deepfakes and other misinformation to influence elections, or to conduct cybercrimes en masse. ... Less dramatic, but still highly problematic, are the risks that arise when we entrust important tasks and responsibilities to AI, particularly in running businesses and other essential services. It’s certainly no stretch of the imagination to envisage a future global tech outage caused by computer code written and shipped entirely by AI. When these AIs make autonomous decisions that inadvertently cause harm – whether financial loss or actual injury – whom do we hold liable? ... Market forces are already driving things rapidly forward in artificial intelligence. To where, exactly, is less certain. It may turn out that the common law we have now, developed through the courts, is adaptable enough to deal with these new problems. But it’s also possible we’ll find current laws lacking, which could add a sense of injustice to any future disasters.


Making the gen AI and data connection work

Faced with insufficient datasets and the risk of training ML systems with copyrighted data, the challenges for today's CIOs span from privacy and security, to compliance and anonymization. So what can CIOs do beyond being vigilant about regulation and collaborating with fellow managers to help instill trust in AI? ... The real challenge, however, is to “demonstrate and estimate” the value of projects not only in relation to TCO and the broad-spectrum benefits that can be obtained, but also in the face of obstacles such as lack of confidence in tech aspects of AI, and difficulties of having sufficient data volumes. But these are not insurmountable challenges. ... Gartner agrees that synthetic data can help solve the data availability problem for AI products, as well as privacy, compliance, and anonymization challenges. Synthetic data can be generated to reflect the same statistical characteristics as real data, but without revealing personally identifiable information, thereby complying with privacy-by-design regulations and other sensitive details. 
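In its simplest form, that statistical-fidelity idea looks like this: fit summary statistics on the real column, then sample fresh values from them. Real synthetic-data generators model joint distributions and correlations across many columns; this single-column sketch, with invented salary figures, only shows the principle.

```python
import random
import statistics

random.seed(0)  # deterministic for the example

# "Real" column (invented figures standing in for sensitive records)
real_salaries = [52000, 61000, 58000, 75000, 49000, 66000]
mu = statistics.mean(real_salaries)
sigma = statistics.stdev(real_salaries)

# Sample synthetic values with the same mean and spread; no synthetic
# row corresponds to any real person's record.
synthetic = [random.gauss(mu, sigma) for _ in range(1000)]
```

The synthetic column preserves the distributional shape an ML model needs while containing no personally identifiable values, which is the compliance win Gartner describes.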


What’s the Difference Between Observability and Monitoring?

Monitoring acts like a vigilant guard, constantly checking system health against predefined thresholds for signs of trouble. Its primary goal is to track the health and performance of systems based on established metrics and logs, like CPU utilization, memory usage, server response times or even application-specific data points. ... While monitoring excels at identifying deviations, observability aims to understand the system’s internal state by analyzing its external outputs. Similar to a detective looking at clues at a crime scene, observability gathers all available data (metrics, logs, traces and events) to not only identify the issue but also uncover its root cause. This holistic view allows teams to diagnose complex problems and anticipate potential breakdowns before they occur. ... Observability and monitoring are complementary rather than alternative practices. Monitoring is a vigilant guard while observability is a thoughtful analyst. Being able to react to some issues immediately while preventing others and making overall system improvements over time — the winning strategy combines both.
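The division of labor can be sketched in a few lines: monitoring is the threshold check against predefined metrics; observability is what you do with the richer telemetry once a check fires. Metric names and threshold values below are illustrative.

```python
# Monitoring: compare current metrics against predefined thresholds.
# Observability then uses logs, traces and events to explain any breach.
THRESHOLDS = {"cpu_pct": 90.0, "memory_pct": 85.0, "response_ms": 500.0}

def check_health(metrics):
    """Return the names of metrics that breached their thresholds."""
    return [name for name, value in metrics.items()
            if value > THRESHOLDS.get(name, float("inf"))]

alerts = check_health({"cpu_pct": 95.2, "memory_pct": 60.0,
                       "response_ms": 120.0})
# alerts tells you *that* CPU deviated; traces and logs tell you *why*
```

Note what the monitor cannot do: it flags the deviation but carries no context about root cause, which is precisely the gap observability fills.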


Microsoft Teams offers more for developers

Much of the new developer functionality comes from an updated JavaScript library: TeamsJS 2.0. As it offers a lot of backwards compatibility, older applications can be quickly ported to the latest release, adding support for Outlook as well as Teams. Some changes will need to be made, for example, updating code to support more modern JavaScript asynchronous capabilities. At the same time, there’s been a reorganization of the library’s APIs, grouping them by capability. Microsoft has updated its Visual Studio Code Teams Toolkit to help with application migrations. This automates the process of updating dependencies and app manifests, providing notifications of where you need to update interfaces and callbacks. It’s not completely automatic, but it does help you start making necessary changes. ... Another interesting developer feature is support for Mermaid, a JavaScript-based language that allows you to quickly add charts and diagrams. Again, this can be used collaboratively, enabling architects and other development team members to dynamically document code snippets, showing how they interact and what functionality they offer. 


Quantum Cryptography Has Everyone Scrambling

At the center of these varied cryptography efforts is the distinction between QKD and post-quantum cryptography (PQC) systems. QKD is based on quantum physics, which holds that entangled qubits can store their shared information so securely that any effort to uncover it is unavoidably detectable. Sending pairs of entangled-photon qubits to both ends of a network provides the basis for physically secure cryptographic keys that can lock down data packets sent across that network. ... Typically, quantum cryptography systems are built around photon sources that chirp out entangled photon pairs—where photon A heading down one length of fiber has a polarization that’s perpendicular to the polarization of photon B heading in the other direction. The recipients of these two photons perform separate measurements that enable both recipients to know that they and only they have the shared information transmitted by these photon pairs. ... By contrast, post-quantum cryptography (PQC) is based not around quantum physics but pure math, in which next-generation cryptographic algorithms are designed to run on conventional computers. 



Quote for the day:

"We get our power from the people we lead, not from our stars and our bars." -- J. Stanford

Daily Tech Digest - August 08, 2024

4 Common LCNC Security Vulnerabilities and How To Mitigate Them

While LCNC platforms allow access restrictions on the data, they are applied on the client side by default. Unfortunately, a user with access to the application can bypass these restrictions and gain unauthorized access to the underlying data sources. Citizen developers might not be aware of the risk associated with default settings when configuring access rules. This can cause an external breach if the application is accessible over the internet or a report is published on the web. ... Apps and automation created on LCNC platforms are not immune to traditional web application vulnerabilities such as SQL injection. Consider a form for collecting user complaints that can be exploited by injecting SQL code, allowing an attacker from the internet to retrieve sensitive data, including usernames and salaries, from the database. This vulnerability arises when developers include user input directly in SQL queries without proper parameterization. ... Citizen developers mistakenly use LCNC applications and automation to send sensitive data through personal emails, store corporate data insecurely in public network drives, and generate and distribute anonymous access links to corporate resources. 
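The complaint-form injection described above can be reproduced end to end with an in-memory SQLite table standing in for the LCNC platform's backing database; the table contents and payload are illustrative.

```python
import sqlite3

# Stand-in for the platform's database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (username TEXT, salary INTEGER)")
conn.execute("INSERT INTO employees VALUES ('alice', 90000)")

malicious = "x' OR '1'='1"  # classic injection payload via a form field

# Vulnerable: user input concatenated directly into the SQL string
leaked = conn.execute(
    f"SELECT username, salary FROM employees WHERE username = '{malicious}'"
).fetchall()  # the OR '1'='1' clause matches every row

# Safe: parameterization makes the driver treat the value as data, not SQL
safe = conn.execute(
    "SELECT username, salary FROM employees WHERE username = ?", (malicious,)
).fetchall()  # no user is literally named "x' OR '1'='1", so no rows
```

The vulnerable query returns usernames and salaries for the whole table; the parameterized one returns nothing, which is the mitigation citizen developers need their platform to enforce by default.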


EU’s DORA regulation explained: New risk management requirements for financial firms

The EU says that despite the financial sector’s increased reliance on IT firms, there is a lack of specific powers to address ICT risks arising from those third parties. The act puts critical ICT third-party service providers into the scope of regulators and subjects them to an oversight framework at the EU level. “DORA continues the impetus over the past decade in outsourced and third-party governance,” says Chaudhry, “with a focus on chain outsourcing and resiliency, with clarity that critical ICT third-party providers, including cloud service providers, need to be within the regulatory perimeter.” Under these rules, European Supervisory Authorities (ESAs) would have the right to access documents, carry out inspections, and subject third parties to fines if deemed necessary. ... In an early analysis of the regulation, Deloitte said that most firms in the sector would welcome the introduction of an oversight framework as it will provide more legal certainty around what is permissible, a level of assurance on the security of their assets in the cloud, and likely increase firms’ confidence and appetite for transitioning some of their activities to the cloud.


No god in the machine: the pitfalls of AI worship

The problem of theodicy has been a topic of debate among theologians for centuries. It asks: if an absolutely good God is omniscient, omnipotent and omnipresent, how can evil exist when God knows it will happen and can stop it? It radically oversimplifies the theological issue, but theodicy, too, is in some ways a kind of logical puzzle, a pattern of ideas that can be recombined in particular ways. I don’t mean to say that AI can solve our deepest epistemological or philosophical questions, but it does suggest that the line between thinking beings and pattern recognition machines is not quite as hard and bright as we may have hoped. The sense of there being a thinking thing behind AI chatbots is also driven by the now common wisdom that we don’t know exactly how AI systems work. What’s called the black box problem is often framed in mystical terms – the robots are so far ahead or so alien that they are doing something we can’t comprehend. That is true, but not quite in the way it sounds. New York University professor Leif Weatherby suggests that the models are processing so many permutations of data that it is impossible for a single person to wrap their head around it. 


Critical AWS Vulnerabilities Allow S3 Attack Bonanza

The researchers first uncovered Bucket Monopoly, an attack method that can significantly boost the success rate of attacks that exploit AWS S3 buckets — i.e., online storage containers for managing objects, such as files or images, and resources required for storing operational data. The issue is that service-created S3 bucket names were built from predictable, easy-to-guess AWS account IDs rather than from a unique identifier, such as a hash or qualifier, for each bucket name. "Sometimes the only thing that an attacker needs to know about an organization is their public account ID for AWS, which is not considered sensitive data right now, but we recommend it is something that an organization should keep as a secret," Kadkoda says. To mitigate the issue, AWS changed the default configurations. "All of the services have been fixed by AWS in that they no longer create the bucket name automatically," he explains. "AWS now adds a random identifier or sequence number if the desired bucket name already exists." Security researchers and AWS customers have long debated whether AWS account IDs should be public or private. 
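The core of the problem, and of AWS's fix, can be sketched in a few lines. The `service-assets` naming pattern below is hypothetical (invented for illustration, not AWS's actual scheme): the point is that a name computable from a public account ID can be pre-claimed by an attacker, while a name with a random, unguessable suffix cannot:

```python
import secrets

def predictable_bucket_name(account_id: str, region: str) -> str:
    # Anyone who learns the (non-secret) account ID can compute this name
    # and squat on it in their own account before the victim creates it.
    return f"service-assets-{account_id}-{region}"

def hardened_bucket_name(account_id: str, region: str) -> str:
    # Mitigation in the spirit of AWS's fix: append a random suffix so the
    # full name can no longer be derived from public identifiers alone.
    suffix = secrets.token_hex(4)
    return f"service-assets-{account_id}-{region}-{suffix}"

print(predictable_bucket_name("123456789012", "us-east-1"))
print(hardened_bucket_name("123456789012", "us-east-1"))
```

The same reasoning explains Kadkoda's advice to treat account IDs as secrets: with the predictable scheme, the account ID is effectively the only unknown.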


Data Ethics: New Frontiers in Data Governance

While morals concern subjective notions of good and bad, and laws concern the limits of what is socially acceptable, Aiken and Lopez define ethics as “the difference between what you have the right to do and what is the right thing to do.” Navigating that crucial difference is rarely cut and dried even in simple, day-to-day personal interactions. Still, within the world of data, ethical questions can quickly take on multiple dimensions and present challenges unique to the field. Assessing data ethics can be decidedly confusing, for as Lopez pointed out, “Not all things that are bad for data are actually bad for the world … and vice versa.” Whereas the ethical actions and judgments that we make as private individuals tend to play out within a limited set of factors, the implications of even the most innocuous events within large-scale data management can be huge. Company data exists in “space,” potentially flowing between departments and projects, but privacy agreements and other safeguards that apply for some purposes may not apply to others. Data from spreadsheets authored for in-house analytics, for example, might violate a client privacy agreement if it migrates to open cloud storage.


How network segmentation can strengthen visibility in OT networks

First, it’s crucial to have a comprehensive understanding of the data flow within the environment — knowing what information needs to move and where. Often, technical documentation about operational design is outdated or incomplete, missing details about current data flows and usage. Second, most visibility tools in this space require specific network configurations because traditional antivirus or endpoint protection software isn’t typically viable for these devices. Therefore, it’s necessary to have mechanisms for routing traffic to inspection points. Since many OT networks are designed for resilience and uptime rather than cybersecurity, reconfiguring them to enable traffic inspection can be challenging. Network segmentation projects are time-consuming, expensive, and may lead to operational downtime, which is usually unacceptable in OT environments. Deploying visibility tools also requires identifying the legacy technologies that run rampant in OT networks and won’t support the changes needed to feed those tools. These can include unmanaged switches, network devices that don’t support RSPAN, and outdated or oversubscribed cabling infrastructure.


Is The AI Bubble About To Burst?

While it is said that AI could add around $15 trillion to the value of the global economy, recent earnings reports from the likes of Google and Tesla have been less than stellar, leading to the recent dips in share prices. At the same time, there are reports that the general public is becoming more distrustful of AI and that businesses are finding it difficult to make money from it. Does this mean that the AI revolution—touted as holding the solution to problems as diverse as curing cancer and saving the environment—is about to come crashing down around our ears? ... However, it's important to note that even these tech giants aren't immune to external pressures. The ongoing Google antitrust case, for instance, could have far-reaching implications not just for Google, but for other major players in the tech industry as well. Nvidia is already facing two separate antitrust probes from the U.S. Department of Justice, focusing on its acquisition of Run:ai and alleged anti-competitive practices in the AI chip market. These legal and regulatory challenges could potentially reshape the landscape for Big Tech's AI ambitions. It's also worth mentioning that while the established tech companies have diversified revenue streams, there are newer players like OpenAI and Anthropic that are primarily focused on AI. 


Overcoming Human Error in Payment Fraud: Can AI Help?

Scammers usually target accounts payable departments, which process payments to suppliers and vendors. They typically pose as an existing supplier and send fraudulent invoices to an organization or even digitally gain access to a company's AP processes to authorize large payments, said Infosys. ... Accounts payable automation solutions can flag minute discrepancies in invoices, such as a new address or new bank account details, that manual processes might miss. Alerts can prompt companies to follow up with their vendors to verify the legitimacy of invoices before processing payments. ... Businesses see the potential for AI to reduce fraud losses in B2B payments. Companies can use AI to examine historical data to identify patterns, detect anomalies and automate routine tasks such as data entry and calculations. They can use crowdsourced data from vendors to streamline processes and enhance trust. Technologies that provide end-to-end visibility of the entire B2B payment ecosystem offer a comprehensive view, helping detect and prevent issues arising from human errors. Some organizations have launched AI-based initiatives to fight fraud, but it's too soon to see results. 
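The discrepancy-flagging idea in the excerpt is simple rule-based anomaly detection: compare each incoming invoice against the vendor's last known details and alert on any change. A minimal sketch, with field names and data shapes invented for illustration:

```python
def flag_invoice_discrepancies(invoice: dict, vendor_history: dict) -> list:
    """Return alerts for invoice fields that differ from the vendor's last
    known details. Field names here are illustrative, not a real AP schema."""
    alerts = []
    for field in ("bank_account", "address"):
        known = vendor_history.get(field)
        submitted = invoice.get(field)
        if known is not None and submitted != known:
            alerts.append(f"{field} changed from record on file: "
                          f"verify with vendor before paying")
    return alerts

# A fraudulent invoice that swaps in the scammer's bank account:
alerts = flag_invoice_discrepancies(
    {"bank_account": "DE89 3704 0044 0532 0130 00", "address": "1 Main St"},
    {"bank_account": "GB29 NWBK 6016 1331 9268 19", "address": "1 Main St"},
)
print(alerts)
```

In practice this is where the article's ML angle comes in: rules catch exact-field changes, while models trained on historical payment data can score subtler anomalies such as unusual amounts or timing.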


Post-quantum encryption: Crypto flexibility will prepare firms for quantum threat, experts say

For enterprises, there are two big challenges that come with quantum computers. First of all, we don’t know when the day will come when a quantum computer breaks classical encryption, making it hard to plan for. It would be tempting to put off solving the problem until the quantum computers are here – and then it will be too late. Second, there is the ‘collect now, decrypt later’ threat. Major intelligence agencies may be – and almost certainly are – collecting any and all data they can get their hands on, planning ahead for a future where they can decrypt it all. “They’ve been doing it forever,” Lyubashevsky says. ... One problem, he says, is that encryption is often buried deep inside code libraries and third-party products and services. Or fourth or fifth party. “You have to get a cryptographic bill of materials to discover the cryptography inside – and that’s not easy,” he says. And that’s just the first challenge. Once all the encryption is identified, it needs to be replaced with a modern, flexible system. And that’s not always possible if parts of the system beyond your control have older encryption hard-coded.
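The "flexible system" the experts describe is crypto agility: routing every cryptographic call through a named registry so that swapping algorithms (say, for a post-quantum scheme) is a configuration change rather than a hunt through hard-coded call sites. A minimal sketch of the pattern using Python's hashlib as a stand-in for the primitives being swapped:

```python
import hashlib

# Registry of approved algorithms. Migrating (e.g., to a post-quantum
# primitive) means updating this table, not every call site.
ALGORITHMS = {
    "sha256": hashlib.sha256,
    "sha3_256": hashlib.sha3_256,
}
DEFAULT_ALG = "sha256"

def digest(data: bytes, alg: str = DEFAULT_ALG) -> tuple:
    # Tag output with the algorithm name so stored records stay verifiable
    # even after the default changes.
    return alg, ALGORITHMS[alg](data).hexdigest()

def verify(data: bytes, tagged: tuple) -> bool:
    alg, hexdigest = tagged
    return ALGORITHMS[alg](data).hexdigest() == hexdigest

record = digest(b"contract-2024.pdf")
print(record[0], verify(b"contract-2024.pdf", record))
```

Hashes are only a stand-in here, but the same registry-plus-tagging pattern applies to key exchange and signatures, which are what 'collect now, decrypt later' actually threatens. This tagging is also a small step toward the cryptographic bill of materials Lyubashevsky mentions: each artifact records which primitive protected it.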


Study backer: Catastrophic takes on Agile overemphasize new features

"Testing is kind of one of those tools that are there, but in order for testing to actually be able to work at all you need to know what you're testing. So you need good requirements to outline the non-functional requirements that are there." Such as reliability. "The interesting thing is that a lot of people, I think, in the Agile community, a lot of the Agile fundamentalists will argue that user stories are sufficient. These essentially just describe functional behavior, but they lack a generalizable specification or nonfunctional requirements." "And so I think that's one of the key flaws. When you end up looking at the most dogmatic application of Agile, we just have user stories, but you've lacked that generalizable specification." ... For software engineering, however, things are less rosy. He points to an interpretation of DevOps where issues don't really matter as long as the system recovers from them, and velocity and quality are never in conflict. "This has led to absolutely catastrophic outcomes in the past." However, it is organizational transformation, where a methodology and mindset branded as "Agile" is applied across a business, which is where the wheels can really come off. 



Quote for the day:

"Nobody who has ever given his best has regretted it." -- George Halas