Daily Tech Digest - May 22, 2024

Guide to Kubernetes Security Posture Management (KSPM)

Bad security posture impacts your ability to respond to new and emerging threats because of extra “strain” on your security capabilities caused by misconfigurations, gaps in tooling, or inadequate training. ... GitOps manages all cluster changes via Configuration as Code (CaC) in Git, eliminating manual cluster modifications. This approach aligns with the Principle of Least Privilege and offers benefits beyond security. GitOps ensures deployment predictability, stability and admin awareness of the cluster’s state, preventing configuration drift and maintaining consistency across test and production clusters. Additionally, it reduces the number of users with write access, enhancing security. ... Human log analysis is crucial for retrospectively reviewing security incidents. However, real-time monitoring and correlation are essential for detecting incidents initially. While manual methods like SIEM solutions with dashboards and alerts can be effective, they require significant time and effort to extract relevant data. 
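
GitOps tooling essentially automates a reconciliation loop: the desired state lives as Configuration as Code in Git, the live state is read from the cluster API, and any difference is reported (or reverted) as drift. Below is a minimal Python sketch of that comparison step, assuming PyYAML is available; the manifest, field names, and hand-edited live state are illustrative rather than any particular GitOps tool's API.

    # Minimal sketch: compare Git-declared desired state against live cluster state.
    # Assumes PyYAML is installed; the manifest and live state below are illustrative.
    import textwrap
    import yaml

    desired_manifest = """
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: payments-api
    spec:
      replicas: 3
    """

    def flatten(obj, prefix=""):
        """Flatten nested dicts into dotted paths for field-by-field comparison."""
        items = {}
        if isinstance(obj, dict):
            for key, value in obj.items():
                items.update(flatten(value, f"{prefix}{key}."))
        else:
            items[prefix.rstrip(".")] = obj
        return items

    def detect_drift(desired: dict, live: dict) -> list:
        """Return the fields where the live cluster no longer matches Git."""
        desired_flat, live_flat = flatten(desired), flatten(live)
        return [(path, want, live_flat.get(path))
                for path, want in desired_flat.items()
                if live_flat.get(path) != want]

    desired = yaml.safe_load(textwrap.dedent(desired_manifest))
    live = {"apiVersion": "apps/v1", "kind": "Deployment",
            "metadata": {"name": "payments-api"},
            "spec": {"replicas": 5}}  # e.g., someone scaled the deployment by hand

    for path, want, got in detect_drift(desired, live):
        print(f"drift at {path}: Git declares {want!r}, cluster has {got!r}")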


Where’s the ROI for AI? CIOs struggle to find it

The AI market is still developing, and some companies are adopting the technology without a specific use case in mind, he adds. Kane has seen companies roll out Microsoft Copilot, for example, without any employee training about its uses. ... “I have found very few companies who have found ROI with AI at all thus far,” he adds. “Most companies are simply playing with the novelty of AI still.” The concern about calculating the ROI also rings true to Stuart King, CTO of cybersecurity consulting firm AnzenSage and developer of an AI-powered risk assessment tool for industrial facilities. With the recent red-hot hype over AI, many IT leaders are adopting the technology before they know what to do with it, he says. “I think back to the first discussions that we had within the organizations that we are working with, and it was a case of, ‘Here’s this great new thing that we can use now, let’s go out and find a use for it,’” he says. “What you really want to be doing is finding a problem to solve with it first.” As a developer who has integrated AI into his own software, King is not an AI skeptic.


100 Groups Urge Feds to Put UHG on Hook for Breach Notices

Some experts advise HIPAA-regulated entities that are likely affected by a Change Healthcare breach to take precautionary measures now to prepare for their potential notification duties involving a compromise of their patients' PHI. ... HIPAA-regulated Change Healthcare customers also have an obligation under HIPAA to perform "reasonable diligence" to investigate and obtain information about the incident to determine whether the incident triggers notice obligations to their patients or members, said attorney Sara Goldstein of law firm BakerHostetler. Reasonable diligence includes Change Healthcare customers frequently checking UHG and Optum's websites for updates on the restoration and data analysis process, contacting their Change Healthcare account representative on a regular basis to see if there are any updates specific to their organization, and engaging outside privacy counsel to submit a request for information directly to UnitedHealth Group to obtain further information about the incident, Goldstein said.


‘Innovation Theater’ in Banking Gives Way to a More Realistic and Productive Function

The conservative approach many institutions are taking to GenAI reflects that reality. Buy Now, Pay Later meanwhile makes a great example of how exciting new innovations can unexpectedly reveal a dark side. ... In many institutions, innovation has become less about pure invention and more about applying what’s out there already in new ways and combinations to solve common problems. Doing so doesn’t necessarily require geniuses, but you do need highly specialized “plumbers” who can link together multiple technologies in smart ways. Even the regulatory view has evolved. There was a time when federal regulators held open doors to innovation, even to the extent of offering “sandboxes” to let innovations sprout without weighing them down initially with compliance burdens. But the Consumer Financial Protection Bureau, under the Biden administration, did away with its sandbox early on. Washington today walks a more cautious line on innovation, and that line could veer. The bottom line? Innovators who take their jobs, and the impact of their jobs, seriously, realize that banking innovation must grow up.


AI glasses + multimodal AI = a massive new industry

Both the OpenAI and Google demos clearly reveal a future where, thanks to the video mode in multimodal AI, we’ll be able to show AI something, or a room full of somethings, and engage with a chatbot to help us know, process, remember or understand. It would be all very natural, except for one awkward element. All this holding and waving around of phones to show it what we want it to “see” is completely unnatural. Obviously — obviously! — video-enabled multimodal AI is headed for face computers, a.k.a. AI glasses. And, in fact, one of the most intriguing elements of the Google demo was that during a video demonstration, the demonstrator asked Astra-enhanced Gemini if it remembered where her glasses were, and it directed her back to a table, where she picked up the glasses and put them on. At that point, the glasses — which were prototype AI glasses — seamlessly took over the chat session from the phone (the whole thing was surely still running on the phone, with the glasses providing the camera, microphones and so on).
 

Technological complexity drives new wave of identity risks

The concept of zero standing privilege (ZSP) requires that a user be granted only the minimum levels of access and privilege needed to complete a task, and only for a limited amount of time. Should an attacker gain entry to a user’s account, ZSP ensures there is far less potential for attackers to access sensitive data and systems. The study found that 93% of security leaders believe ZSP is effective at reducing access risks within their organization. Additionally, 91% reported that ZSP is being enforced across at least some of their company’s systems. As security leaders face greater complexity across their organizations’ systems and escalating attacks from adversaries, it’s no surprise that risk reduction was cited as respondents’ top priority for identity and access management (55%). This was followed by improving team productivity (50%) and automating processes (47%). Interestingly, improving user experience was cited as the top priority among respondents who experienced multiple instances of attacks or breaches due to improper access in the last year.
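
A minimal Python sketch of the zero standing privilege idea is shown below: access is granted just in time, scoped to a role, and expires automatically, so a compromised account holds little standing privilege. The class, role names, and policy details are illustrative assumptions, not any vendor's product.

    # Minimal sketch of just-in-time, time-bound access in the spirit of zero
    # standing privilege; the class, roles, and policy details are illustrative.
    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    @dataclass
    class Grant:
        user: str
        role: str
        expires_at: datetime

    class AccessBroker:
        def __init__(self):
            self._grants = []

        def request_access(self, user: str, role: str, minutes: int = 30) -> Grant:
            """Grant a narrowly scoped role for a short window instead of standing access."""
            grant = Grant(user, role, datetime.now(timezone.utc) + timedelta(minutes=minutes))
            self._grants.append(grant)
            return grant

        def is_allowed(self, user: str, role: str) -> bool:
            """Only unexpired grants count; expired grants are pruned on every check."""
            now = datetime.now(timezone.utc)
            self._grants = [g for g in self._grants if g.expires_at > now]
            return any(g.user == user and g.role == role for g in self._grants)

    broker = AccessBroker()
    broker.request_access("alice", "db-readonly", minutes=15)
    print(broker.is_allowed("alice", "db-readonly"))  # True while the window is open
    print(broker.is_allowed("alice", "db-admin"))     # False: never granted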


The Legal Issues to Consider When Adopting AI

Different types of data bring different issues of consent and liability. For example, consider whether your data is personally identifiable information, synthetic content (typically generated by another AI system), or someone else’s intellectual property. Data minimization—using only what you need—is a good principle to apply at this stage. Pay careful attention to how you obtained the data. OpenAI has been sued for scraping personal data to train its algorithms. And, as explained below, data-scraping can raise questions of copyright infringement. ... Companies also need to consider the potential for inadvertent leakage of confidential and trade-secret information by an AI product. If they allow employees to use technologies such as ChatGPT (for text) and GitHub Copilot (for code generation) internally, companies should note that such generative AI tools often take user prompts and outputs as training data to further improve their models. Luckily, generative AI companies typically offer more secure services and the ability to opt out of model training.


How innovative power sourcing can propel data centers toward sustainability

The increasing adoption of Generative AI technologies over the past few years has placed unprecedented energy demands on data centers, coinciding with a global energy emergency exacerbated by geopolitical crises. Electricity prices have since reached record highs in certain markets, while oil prices soared to their highest level in over 15 years. Volatile energy markets have made the general population more aware of the need to be flexible in their energy use. At the same time, the trends present an opportunity for the data center sector to get ahead of the game. By becoming managers of energy, as opposed to just consumers, market players can find more efficient and cost-effective ways to source power. Innovative renewable options present a highly attractive avenue in this regard. As a result, data center providers are working more collaboratively with the energy sector for solutions. And for them, it’s increasingly likely that optimizing efficiency won’t be just about being close to the grid, but also about being close to the power-generation site – or even generating and storing power on-site.


Google DeepMind Introduces the Frontier Safety Framework

Existing protocols for AI safety focus on mitigating risks from existing AI systems. Some of these methods include alignment research, which trains models to act within human values, and implementing responsible AI practices to manage immediate threats. However, these approaches are mainly reactive and address present-day risks, without accounting for the potential future risks from more advanced AI capabilities. In contrast, the Frontier Safety Framework is a proactive set of protocols designed to identify and mitigate future risks from advanced AI models. The framework is exploratory and intended to evolve as more is learned about AI risks and evaluations. It focuses on severe risks resulting from powerful capabilities at the model level, such as exceptional agency or sophisticated cyber capabilities. The Framework aims to align with existing research and Google’s suite of AI responsibility and safety practices, providing a comprehensive approach to preventing any potential threats.


Proof-of-concept quantum repeaters bring quantum networks a big step closer

There are two main near-term use cases for quantum networks. The first use case is to transmit encryption keys. The idea is that public key encryption – the type currently used to secure Internet traffic – could soon be broken by quantum computers. Symmetric encryption – where the same key is used to both encrypt and decrypt messages – is more future-proof, but you need a way to get that key to the other party. ... Today, however, the encryption we have is good enough, and there’s no immediate need for companies to look for secure quantum networks. Plus, there’s progress already being made on creating quantum-proof encryption algorithms. The other use for quantum networks is to connect quantum computers. Since quantum networks transmit entangled photons, the computers so connected would also be entangled, theoretically allowing for the creation of clustered quantum computers that act as a single machine. “There are ideas for how to take quantum repeaters and parallelize them to provide very high connectivity between quantum computers,” says Oskar Painter, director of quantum hardware at AWS.
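
For illustration, the hedged Python sketch below uses the cryptography package's Fernet recipe to show the key-distribution problem in miniature: symmetric encryption is straightforward once both parties hold the same secret key, and securely delivering that key to the other party is exactly the job quantum key distribution over such networks is meant to do. The message content is, of course, made up.

    # Symmetric encryption sketch using the "cryptography" package (pip install cryptography).
    # It illustrates why key distribution matters: both sides need the same secret key,
    # and getting that key to the other party securely is the hard part a quantum
    # network could help with.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()     # this key must somehow reach the recipient securely
    sender = Fernet(key)
    token = sender.encrypt(b"wire transfer: approve batch 42")

    receiver = Fernet(key)          # the recipient can only decrypt with the same key
    print(receiver.decrypt(token))  # b'wire transfer: approve batch 42'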



Quote for the day:

"Many of life’s failures are people who did not realize how close they were to success when they gave up." -- Thomas Edison

Daily Tech Digest - May 21, 2024

Most Software Engineers Know Nothing About Hardware

Most software engineers would like to believe that they have no need to know the intricacies of hardware, as long as what they are using supports the software they want to use and build. On the contrary, however, a user offered a thought-provoking take, suggesting that understanding hardware could bolster several fields, such as cybersecurity. “I think it would help in programming to know how the chip and memory think only to secure the program from hackers,” he said. This highlights a practical benefit of hardware knowledge that goes beyond mere academic interest. Moreover, software engineers who know a thing or two about hardware can create better software and build stronger software capabilities on top of the hardware. This perspective suggests that a deeper understanding of hardware can lead to more efficient and innovative software solutions. The roles of software engineers are also changing with the advent of AI tools. For over a decade, a popular belief has been that a computer science degree is all you need to tread the path to wealth, especially in a country like India.


Network teams are ready to switch tool vendors

For a variety of reasons, network management tools have historically been sticky in IT organizations. First, tool vendors sold them with perpetual licenses, which meant a long-term investment. Second, tools could take time to implement, especially for larger companies that invest months of time customizing data collection mechanisms, dashboards, alerts, and more. Also, many tools were difficult to use, so they came with a learning curve. But things have changed. Most network management tools are now available as SaaS solutions with a subscription license. Many vendors have developed new automation features and AI-driven features that reduce the amount of customization that some IT organizations will need to do. ... For all these reasons, many IT organizations feel less locked into their network management tools today. Still, it’s important to note that replacing tools remains challenging. In fact, network teams that struggle to hire and retain skilled personnel are less likely to replace a tool. They don’t have the capacity to tackle such a project because they’re barely keeping up with day-to-day operations. Larger enterprises, which have larger and more complex networks, were also less open to new tools.


Reducing CIO-CISO tension requires recognizing the signs

In the case of highly critical vulnerabilities that have been exploited, the CISO will want patches applied immediately, and the CIO is likely aligned with this urgency. But for medium-level patches, the CIO may be under pressure to defer these disruptions to production systems, and may push back on the CISO to wait a week or even months before patching. ... Incident management is another area ripe for tension. The CISO has a leadership role to play when there is a serious cyber or business disruption incident, and is often the “messenger” who shares the bad news. Naturally, the CIO wants to be immediately informed, but often the details are sparse with many unknowns. This can make the CISO look bad to the CIO, as there are often more questions than answers at this early stage. ... A fifth example is DevOps, as many CIOs, including myself, advocate for continuous delivery at velocity. Unfortunately, not as many CIOs advocate for DevSecOps to embed cybersecurity testing in the process. This is perhaps because the CIO is often under pressure from executive stakeholders to release new software builds and thus accept the risk that there may be some iteration required if this is not perfect.


Strategies for combating AI-enhanced BEC attacks

In addition to employee training and a zero-trust approach, companies should leverage continuous monitoring and risk-based access decisions. Security teams can use advanced analytics to monitor user activity and identify anomalies that might indicate suspicious behavior. Additionally, zero trust allows for implementing risk-based access controls – for example, access from an unrecognized location might trigger a stronger authentication challenge or require additional approval before granting access. Security teams can also use network segmentation to contain threats. This involves dividing the network into smaller compartments. So, even if attackers manage to breach one section, their movement is restricted, preventing them from compromising the entire network. ... Building a robust defense against BEC attacks requires a layered approach. Comprehensive security strategies that leverage zero trust are a must. However, they can’t do all the heavy lifting alone. Businesses must also empower their employees to make the right decisions by investing in security awareness training that incorporates real-world scenarios and teaches employees how to identify and report suspicious activities.
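
The risk-based access idea can be made concrete with a short sketch: contextual signals are scored, and the score decides whether to allow the request, demand step-up authentication, or deny it outright. The signals, weights, and thresholds below are illustrative assumptions, not a production policy.

    # Minimal sketch of a risk-based access decision: contextual signals raise a score,
    # and the score decides whether to allow, challenge (step-up auth), or deny.
    # Signals, weights, and thresholds are illustrative.
    def assess_login(known_device: bool, recognized_location: bool,
                     impossible_travel: bool, off_hours: bool) -> str:
        score = 0
        score += 0 if known_device else 2
        score += 0 if recognized_location else 2
        score += 4 if impossible_travel else 0
        score += 1 if off_hours else 0

        if score >= 5:
            return "deny"
        if score >= 2:
            return "challenge"  # e.g., require a stronger MFA factor or manual approval
        return "allow"

    print(assess_login(known_device=True,  recognized_location=True,
                       impossible_travel=False, off_hours=False))  # allow
    print(assess_login(known_device=False, recognized_location=False,
                       impossible_travel=False, off_hours=True))   # deny (score 5)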


From sci-fi to reality: The dawn of emotionally intelligent AI

Greater ability to integrate audio, visual and textual data opens potentially transformative opportunities in sectors like healthcare, where it could lead to more nuanced patient interaction and personalized care plans. ... As GPT-4o and similar offerings continue to evolve, we can anticipate more sophisticated forms of natural language understanding and emotional intelligence. This could lead to AI that not only understands complex human emotions but also responds in increasingly appropriate and helpful ways. The future might see AI becoming an integral part of emotional support networks, providing companionship and aid that feels genuinely empathetic and informed. The journey of AI from niche technology to a fundamental part of our daily interactions is both exhilarating and daunting. To navigate this AI revolution responsibly, it is essential for developers, users and policymakers to engage in a rigorous and ongoing dialogue about the ethical use of these technologies. As GPT-4o and similar AI tools become more embedded in our daily lives, we must navigate this transformative journey with wisdom and foresight, ensuring AI remains a tool that empowers rather than diminishes our humanity.


Unlocking DevOps Mastery: A Comprehensive Guide to Success

From code analysis and vulnerability scanning to access control and identity management, organizations must implement comprehensive security controls to mitigate risks throughout the software development lifecycle. Furthermore, compliance with industry standards and regulatory requirements must be baked into the DevOps process from the outset rather than treated as an afterthought. Moreover, organizations must be vigilant about ethical considerations and algorithmic bias in environments leveraging AI and machine learning, where the stakes are heightened. By embedding security and compliance into every stage of the DevOps pipeline, organizations can build trust and confidence among stakeholders and mitigate potential risks to their reputation and bottom line. DevSecOps, an extension of DevOps, emphasizes integrating security practices throughout the software development lifecycle (SDLC). Several key security practices and frameworks should be integrated into the DevOps program. 
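
One common way to embed such controls is a pipeline gate that scans every build and blocks promotion when findings exceed a severity threshold. The Python sketch below shows the shape of such a gate; the scan-tool command and its flags are hypothetical placeholders for whichever SAST, dependency, or image scanner the pipeline actually uses.

    # Minimal sketch of a pipeline gate: run a security scanner and fail the build on
    # high-severity findings. The "scan-tool" command and its flags are hypothetical
    # placeholders for whatever scanner is in use.
    import subprocess
    import sys

    def security_gate(image: str) -> None:
        result = subprocess.run(
            ["scan-tool", "--severity", "high", image],  # hypothetical CLI
            capture_output=True, text=True,
        )
        print(result.stdout)
        if result.returncode != 0:
            # A non-zero exit code means findings at or above the configured severity:
            # stop the promotion here, not in production.
            sys.exit(f"security gate failed for {image}")

    if __name__ == "__main__":
        security_gate("registry.example.com/payments-api:1.4.2")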


Composable Enterprise: The Evolution of MACH and Jamstack

As the Jamstack and the MACH Architecture continue to evolve, categorizing the MACH architecture as “Jamstack for the enterprise” might not entirely be accurate, but it’s undeniable that the MACH approach has been gaining traction among vendors and has increasing appeal to enterprise customers. Demeny points out that the MACH Alliance recently celebrated passing the 100 certified member mark, and believes that the organization and the MACH architecture are entering a new phase. “This also means that the audience profile of the MACH community and buyers is starting to shift a bit from developers to more business-focused stakeholders,” said Demeny. “As a result, the Alliance is producing more work around interoperability understanding and standards in order to help these newer stakeholders understand and navigate the landscape.” Regardless of what tech stack developers and organizations choose, the evolution of the Jamstack and the MACH architecture are providing more options and flexibility for developers.


The Three As of Building A+ Platforms: Acceleration, Autonomy, and Accountability

If the why is about creating value for the business, the what is all about driving velocity for your users, bringing delight to your users, and making your users awesome at what they do. This requires bringing a product mindset to building a platform. ... This is where I found it very useful to think in terms of the Double Diamond framework, where the first diamond is about product discovery and problem definition and the second is about building a solution. While in the first diamond you can do divergent thinking and ideation, either widely or deeply, the second diamond allows for action-oriented, focused thinking that converges into developing and delivering the solution. ... Platforms cannot be shaky - solid fundamentals (Reliability, Security, Privacy, Compliance, disruption) and operational excellence are table stakes, not a nice-to-have. Our platforms have to be stable. In our case, we decided to put a stop to all feature delivery for about a quarter, did a methodical analysis of all the failures that led to the massive drop in deploy rates, and focused on crucial reliability efforts until we brought this metric back up to 99%+.


Training LLMs: Questions Rise Over AI Auto Opt-In by Vendors

"Organizations who use these technologies must be clear with their users about how their information will be processed," said John Edwards, Britain's Information Commissioner, in a speech last week at the New Scientist Emerging Technologies summit in London. "It's the only way that we continue to reap the benefits of AI and emerging technologies." Whether opting in users by default complies with GDPR remains an open question. "It's hard to think how an opt-out option can work for AI training data if personal data is involved," Armstrong said. "Unless the opt-out option is really prominent - for example, clear on-screen warnings; burying it in the terms and conditions won't be enough - that's unlikely to satisfy GDPR's transparency requirements." Clear answers remain potentially forthcoming. "Many privacy leaders have been grappling with questions around topics such as transparency, purpose limitation and grounds to process in relation to the use of personal data in the development and use of AI," said law firm Skadden, Arps, Slate, Meagher & Flom LLP, in a response to a request from the U.K. government to domestic regulators to detail their approach to AI. 


Data Owner vs. Data Steward: What’s the Difference?

Data owners (also called stakeholders) are often senior leaders or bosses within the organization, who have taken responsibility for managing the data in their specific department or business area. For instance, the director of marketing or the head of production are often data owners because the data used by their staff is critical to their operations. It is a position that requires both maturity and experience. Data owners are also responsible for implementing the security measures necessary for protecting the data they own – encryption, firewalls, access controls, etc. The data steward, on the other hand, is responsible for managing the organization’s overall Data Governance policies, monitoring compliance, and ensuring the data is of high quality. They also oversee the staff, acting as a form of data police, to ensure they are following the guidelines that support high-quality data. ... Data stewards can offer valuable recommendations and insights to data owners, and vice versa. Regular meetings and collaboration between the data steward and data owners are necessary for successful Data Governance and management.



Quote for the day:

"Pursue one great decisive aim with force and determination." -- Carl Von Clause Witz

Daily Tech Digest - May 18, 2024

AI imperatives for modern talent acquisition

In talent acquisition, the journey ahead promises to be tougher than ever. Recruiters face a paradigm shift, moving beyond traditional notions of filling vacancies to addressing broader business challenges. The days of simply sourcing candidates are long gone; today's TA professionals must navigate complexities ranging from upskilling and reskilling to mobility and contracting. ... At the heart of it lies a structural shift reshaping the global workforce. Demographic trends, such as declining birth rates, paint a sobering picture of a world where there simply aren't enough people to fill available roles. This demographic drought isn't limited to a single region; it's a global phenomenon with far-reaching implications. Compounding this challenge is the changing nature of careers. No longer tethered to a single company, employees are increasingly empowered to seek out opportunities that align with their aspirations and values. This has profound implications for talent retention and development, necessitating a shift towards systemic HR strategies that prioritise upskilling, mobility, and employee experience.


Ineffective scaled agile: How to ensure agile delivers in complex systems

When developing a complex system it’s impossible to uncover every challenge even with the most in-depth upfront analysis. One way of dealing with this is by implementing governance that emphasizes incorporating customer feedback, active leadership engagement and responding to changes and learnings. Another challenge can arise when teams begin to embrace working autonomously. They start implementing local optimizations which can lead to inefficiencies. The key is that the governance approach should make sure that the overall work is broken down into value increments per domain and then broken down further into value increments per team in regular time intervals. This creates a shared sense of purpose across teams and guides them towards the same goal. Progress can then be tracked using the working system as the primary measure of progress. Those responsible for steering the overall program need to facilitate feedback and prioritization discussions, and should encourage the leadership to adapt to internal insights or changes in the external environment.


How to navigate your way to stronger cyber resilience

If an organization doesn’t have a plan for what to do if a security incident takes place, they risk finding themselves in the precarious position of not knowing how to react to events, and consequently doing nothing or the wrong thing. The report also shows that just over a third of the smaller companies worry that senior management doesn’t see cyberattacks as a significant risk. How can they get greater buy-in from their management team on the importance of cyber risks? It’s important to understand that this is not a question of management failure. It is hard for business leaders to engage with or care about something they don’t fully understand. The onus is on security professionals to speak in a language that business leaders understand. They need to be storytellers and be able to explain how to protect brand reputation through proactive, multi-faceted defense programs. Every business leader understands the concept of risk. If in doubt, present cybersecurity threats, challenges, and opportunities in terms of how they relate to business risk.


DDoS attacks: Definition, examples, and techniques

DDoS botnets are the core of any DDoS attack. A botnet consists of hundreds or thousands of machines, called zombies or bots, that a malicious hacker has gained control over. The attackers will harvest these systems by identifying vulnerable systems that they can infect with malware through phishing attacks, malvertising attacks, and other mass infection techniques. The infected machines can range from ordinary home or office PCs to IoT devices—the Mirai botnet famously marshalled an army of hacked CCTV cameras—and their owners almost certainly don’t know they’ve been compromised, as they continue to function normally in most respects. The infected machines await a remote command from a so-called command-and-control server, which serves as a command center for the attack and is often itself a hacked machine. Once unleashed, the bots all attempt to access some resource or service that the victim makes available online. Individually, the requests and network traffic directed by each bot towards the victim would be harmless and normal.


7 ways to use AI in IT disaster recovery

The integration of AI into IT disaster recovery is not just a trendy addition; it's a significant enhancement that can lead to quicker response times, reduced downtime and overall improved business continuity. By proactively identifying risks, optimizing resources and continuously learning from past incidents, AI offers a forward-thinking approach to disaster recovery that could be the difference between a minor IT hiccup and a significant business disruption. ... A significant portion of IT disasters are due to cyberthreats. AI and machine learning can help mitigate these issues by continuously monitoring network traffic, identifying potential threats and taking immediate action to mitigate risks. Most new cybersecurity businesses are using AI to learn about emerging threats. They also use AI to look at system anomalies and block questionable activity. ... AI can optimize the use of available resources, ensuring that critical functions receive the necessary resources first. This optimization can greatly increase the efficiency of the recovery process and help organizations working with limited resources.
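
As a concrete illustration of the monitoring use case, the sketch below fits an anomaly detector on normal traffic features and flags outliers for investigation. It assumes scikit-learn is available, and the feature values are synthetic placeholders rather than real telemetry.

    # Minimal sketch of AI-assisted monitoring: fit an anomaly detector on "normal"
    # traffic features and flag outliers for investigation. Requires scikit-learn;
    # the feature values below are synthetic placeholders.
    from sklearn.ensemble import IsolationForest

    # Each row: [requests_per_minute, avg_response_ms, error_rate]
    normal_traffic = [
        [120, 85, 0.01], [110, 90, 0.02], [130, 80, 0.01],
        [125, 88, 0.015], [118, 92, 0.02], [122, 86, 0.01],
    ]
    detector = IsolationForest(contamination=0.1, random_state=0).fit(normal_traffic)

    new_samples = [
        [121, 87, 0.01],   # looks like business as usual
        [900, 450, 0.35],  # sudden spike: possible incident in the making
    ]
    for sample, label in zip(new_samples, detector.predict(new_samples)):
        status = "anomaly - investigate" if label == -1 else "normal"
        print(sample, "->", status)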


Underwater datacenters could sink to sound wave sabotage

In a paper available on the arXiv open-access repository, the researchers detail how sound at a resonant frequency of the hard disk drives (HDDs) deployed in submerged enclosures can cause throughput reduction and even application crashing. HDDs are still widely used in datacenters, despite their obituary having been written many times, and are typically paired with flash-based SSDs. The researchers focused on hybrid and full-HDD architectures to evaluate the impact of acoustic attacks. The researchers found that sound at the right resonance frequency would induce vibrations in the read-write head and platter of the disks by vibration propagation, proportional to the acoustic pressure, or intensity of the sound. This affects the disk's read/write performance. For the tests, a Supermicro rack server configured with a RAID 5 storage array was placed inside a metal enclosure in two scenarios: an indoor laboratory water tank and an open-water testing facility, which was actually a lake on the University of Florida campus. Sound was generated from an underwater speaker.


Agile Design, Lasting Impact: Building Data Centers for the AI Era

While there is a clear need for more data centers, the development timeline of building new, modern data centers incorporating these technologies and regulatory adaptations is currently between three to five years (more in some cases). And not just that, the fast pace at which technology is evolving means manufacturers are likely to face the need to rethink strategy and innovation mid-build to accommodate further advancements. ... This is a pivotal moment for our industry and what’s built today could influence what’s possible tomorrow. We’ve had successful adaptations before, but due to the current pace of evolution, future builds need to be able to accommodate retrofits to ensure they remain fit for purpose. It's crucial to strike a balance between meeting demand, adhering to regulations, and designing for adaptability and durability to stay ahead. We might see a rise in smaller, colocation data centers offering flexibility, reduced latency, and cost savings. At the same time, medium players could evolve into hyperscalers, with the right vision to build something suitable to exist in the next hype cycle.


Quantum internet inches closer: Qubits sent 22 miles via fiber optic cable

Even as the biggest names in the tech industry race to build fault-tolerant quantum computers, the transition from binary to quantum can only be completed with a reliable internet connection to transmit the data. Unlike binary bits transported as light signals inside a fiber optic cable that can be read, amplified, and transmitted over long distances, quantum bits (qubits) are fragile, and even attempting to read them changes their state. ... Researchers in the Netherlands, China, and the US separately demonstrated how qubits could be stored in “quantum memory” and transmitted over the fiber optic network. Ronald Hanson and his team at the Delft University of Technology in the Netherlands encoded qubits in the electrons of nitrogen atoms and nuclear states of carbon atoms of the small diamond crystals that housed them. An optical fiber cable ran 25 miles from the university to another laboratory in The Hague to establish a link with similarly embedded nitrogen atoms in diamond crystals.


Cyber resilience: Safeguarding your enterprise in a rapidly changing world

In an era defined by pervasive digital connectivity and ever-evolving threats, cyber resilience has become a crucial pillar of survival and success for modern-day enterprises. It represents an organisation’s capacity to not just withstand and recover from cyberattacks but also to adapt, learn, and thrive in the face of relentless and unpredictable digital challenges. ... Due to the crippling effects a cyberattack can have on a nation, governments and regulatory bodies are also working to develop guidelines and standards which encourage organisations to embrace cyber resilience. For instance, the European Parliament recently passed the European Cyber Resilience Act (CRA), a legal framework to describe the cybersecurity requirements for hardware and software products placed on the European market. It aims to ensure manufacturers take security seriously throughout a product’s lifecycle. In other regions, such as India, where cybersecurity adoption is comparatively evolving, the onus falls on industry leaders to work with governmental bodies and other enterprises to encourage the development and adoption of similar obligations. 


How to Build Large Scale Cyber-Physical Systems

There are several challenges in building hardware-reliant cyber-physical systems, such as hardware lead times, organisational structure, common language, system decomposition, cross-team communication, alignment, and culture. People engaged in the development of large-scale safety-critical systems need line of sight to business objectives, Yeman said. Each team should be able to connect their daily work to those objectives. Yeman suggested communicating the objectives through the intent and goals of the system as opposed to specific tasks. An example of an intent-based system objective would be to ensure the system can communicate with military platforms securely, as opposed to specifically defining that the system must communicate via Link-16, she added. Yeman advised breaking the system problem down into smaller solvable problems. For each of those problems, resolve what is known first and then resolve the unknowns through a series of experiments, she said. This approach allows you to iteratively and incrementally build a continuously validated solution.



Quote for the day:

"Remember, teamwork begins by building trust. And the only way to do that is to overcome our need for invulnerability." -- Patrick Lencioni

Daily Tech Digest - May 17, 2024

Cloud Computing at the Edge: From Evolution to Disruption

While hybrid cloud solutions provide a cloud experience from an operational point of view, they do not support the flexible, consumption-based pricing model of the cloud. Organizations must purchase or lease IT resources for on-premises deployment up front from the cloud provider rather than on demand. And while they can scale up, they can’t scale down and reduce cost if their usage is reduced. Moreover, the fact that the local extensions can only communicate with the centralized cloud and can’t communicate among themselves is a major limitation to the scalability of this model. ... Scalable multicloud architectures offer a robust solution to address IT services at the edge of the network. They provide a comprehensive cloud experience at multiple locations. Proximity to users enhances performance, particularly for localized services and applications, by reducing latency and improving responsiveness. Interconnected clouds facilitate seamless data exchange and collaboration, supporting innovation and agility within organizations. This approach enables data sovereignty and mitigates the risk of downtime and data loss by providing redundancy and resilience across multiple clouds.


Colorado Enacts BIPA-Like Regulatory Obligations (and More)

HB 1130 applies to “biometric identifiers” and “biometric data.” A biometric identifier is defined as “data generated by the technological processing, measurement, or analysis of a consumer’s biological, physical, or behavioral characteristics, which can be processed for the purpose of uniquely identifying an individual.” Biometric data is defined as “one or more biometric identifiers that are used or intended to be used, singly or in combination with each other or with any other personal data, for identification purposes.” Together, the scope of covered data under HB 1130 is much broader as compared to BIPA, Texas’s Capture or Use of Biometric Identifiers Act (CUBI), and similar biometrics laws currently in effect. This aspect of HB 1130 not only increases the extent of legal risk and liability exposure that companies will face but will also create significant complexities and challenges in ascertaining whether organizational biometric data processing activities fall under the scope of HB 1130. Importantly, the combination of HB 1130’s broad applicability and its expansive definitions of biometric identifiers/data will subject controllers to compliance even where only a minimal amount of biometric data is processed and no actual biometric identification or authentication is performed.


Adaptive Data Governance: What, Why, How

Adaptive Data Governance has a framework that balances responses to changing business conditions and meets the requirements for privacy and control. Keys to this structure lie in the data culture and alignment, as described in the “Key Components for an Adaptive DG Framework.” To start, define agile Governance principles that work best with the business culture. Getting this right can prove challenging, because businesspeople may fear losing control of data accessibility, having a diminished data role, or because they find data decision-making challenging. It helps to start with a data maturity model, to understand how well staff values data, find the gaps, and determine the next steps. From there, establish accountability rights through clear roles and responsibilities. The decision-making processes and resources need to be well-defined, especially what to do around time-sensitive and critical issues, in what general contexts, and how and when to escalate them. It helps to include a multiple combination of governance styles that can be applied as needed to the governance situation at hand and can respond to change. DATAVERSITY’s DG definition describes the different governance types.


Distributed Systems: Common Pitfalls and Complexity

Concurrency represents one of the most intricate challenges in distributed systems. Concurrency implies the simultaneous occurrence of multiple computations. So what happens when the account balance is updated simultaneously by different operations? In the absence of a defensive mechanism, it is highly probable that race conditions will ensue, which will inevitably result in lost writes and data inconsistency. In this example, two operations are attempting to update the account balance concurrently. Since they are running in parallel, the last one to complete wins, resulting in a significant issue. ... The CAP Theorem posits that any distributed data store can only satisfy two of the three guarantees: consistency, availability, and partition tolerance. However, since network unreliability is not a factor that can be significantly influenced, in the case of network partitions, the only viable option is to choose between availability and consistency. Consider the scenario in which two clients read from different nodes: one from the primary node and another from the follower. Replication is configured to update followers after the leader has been updated. However, what happens if, for some reason, the leader stops responding?
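
The lost-update problem is easy to reproduce, and one common defensive mechanism is optimistic concurrency control: every write carries the version it read, and stale writes are rejected and retried. The single-process Python sketch below illustrates the idea; a real distributed data store would enforce the version check on the server side.

    # Minimal sketch of the lost-update problem and an optimistic-concurrency fix:
    # each writer submits the version it read, and the store rejects stale writes.
    import threading

    class Account:
        def __init__(self, balance: int):
            self.balance = balance
            self.version = 0
            self._lock = threading.Lock()

        def compare_and_set(self, expected_version: int, new_balance: int) -> bool:
            """Apply the update only if nobody else has written since we read."""
            with self._lock:
                if self.version != expected_version:
                    return False  # stale write: the caller must re-read and retry
                self.balance = new_balance
                self.version += 1
                return True

    def deposit(account: Account, amount: int) -> None:
        while True:  # retry loop on conflict
            version, balance = account.version, account.balance
            if account.compare_and_set(version, balance + amount):
                return

    account = Account(balance=100)
    threads = [threading.Thread(target=deposit, args=(account, 10)) for _ in range(50)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(account.balance)  # 600 every time; a naive read-modify-write could lose updates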


Navigating Three Decades of the Cloud

Today’s organizations have recognized the importance of a strategic, scalable, and incremental approach to their cloud migration efforts. While a 'big-bang' approach may seem attractive, successful organizations are opting for a more phased and purpose-driven approach to enterprise-scale cloud migrations. Moving to the cloud isn't as simple as flipping a switch. Well-thought-out strategic planning, coupled with a clear execution roadmap, is critical to success. Now that technology underpins nearly every aspect of the modern enterprise, it's critical to understand the impacts and implications of modernization across operations, management, finance, IT, and beyond. ... Although the cloud offers unparalleled flexibility and scalability, the specter of rising costs prompts many enterprises to reassess their cloud strategies. As the financial implications of cloud usage become more apparent, organizations find themselves at a crossroads, carefully weighing the benefits against the expenses and reevaluating which workloads to retain on-premises or migrate to private cloud environments.


Are Banks Suffering From ‘Innovation Fatigue’ at the Worst Possible Moment?

The report underscores the importance of aligning performance measurement with strategic objectives. While the metrics provided offer valuable insights into industry benchmarks, relying solely on the data without the context of a well-defined strategy can lead to misguided decisions. To strike the right balance, the report recommends that financial institutions develop a comprehensive digital banking metrics framework. This framework should encompass a range of metrics, including investments, adoption, usage, efficiency, and output, ensuring a holistic understanding of digital banking performance and enabling data-driven decision-making. In conclusion, the 2024 Digital Banking Performance Metrics report serves as a wake-up call for the industry. While financial institutions have made significant investments in digital banking capabilities, the strategic impact of these investments remains uncertain. To navigate the evolving digital landscape successfully, institutions must embrace emerging technologies like AI, reignite their innovation drive, and establish robust performance measurement frameworks aligned with their strategic objectives.


How Technical Debt Can Impact Innovation and How to Fix It

Rafalin said enterprises are facing what he refers to as boiling frog syndrome when it comes to technical debt. "Everyone knows it's an issue, and the clock is ticking, but organizations continue to prioritize releasing new features over maintaining a solid architecture," he said. "With the rise of AI, developers are becoming more and more productive, but this also means they will generate more technical debt. It's inevitable." In Rafalin's view, addressing technical debt requires a strategic vision. While quick patches may save companies in the short term, eventually technical debt will manifest in more outages and vulnerabilities. Technical debt needs to be addressed constantly and proactively as part of the software development life cycle, he said. For organizations just trying to get a quick handle on technical debt, where do they start and what should they do? According to Rafalin, the reality is technical debt that's been accumulating for a long time has no quick fix, especially architectural technical debt. There is no single line of code fix or framework upgrade that solves these architectural issues.


The automation paradox: Identifying when your system is holding you back

A company implementing an automation solution with the promise of significant cost savings sees minimal improvement in their bottom line after months of use. This points towards an automation solution that fails to deliver a significant return on investment (ROI). Basic automation solutions often fall short of their promises because they focus on isolated tasks without considering the bigger picture. Advanced automation solutions with features like intelligent process mining and CPA go beyond basic data extraction and task automation. These features unlock significant ROI potential by identifying inefficiencies in existing workflows and automating tasks that deliver the greatest impact. Beyond just saving Full-Time Equivalent (FTE) costs, cognitive automation provides additional benefits to organizations. ... Effective automation is not a one-time fix; it’s a continuous journey. By recognizing the signs of a plateauing automation strategy and seeking out next-generation solutions, enterprises can break free from the automation paradox. The future belongs to a collaborative approach where humans and intelligent automation work in tandem.


Should You Buy Cyber Insurance in 2024? Pros & Cons

One of the primary challenges of cyber insurance is the rapidly changing nature of cyber threats. As hackers become more sophisticated and new attack vectors emerge, it becomes challenging for insurers to assess and quantify the potential risks accurately. This can lead to coverage gaps and inadequate protection for businesses, as policies may not adequately address emerging cyber threats. Another limitation of cyber insurance is the lack of standardization across policies and coverage options. Each insurer may offer different terms, conditions, and exclusions, making it difficult for businesses to compare policies and make informed decisions. ... Cyber insurance policies typically focus on financial losses resulting from cyber incidents, such as business interruption, data restoration costs, and legal expenses. However, non-monetary losses like reputational damage, loss of customer trust, and diminished brand value may not always be adequately covered. These intangible losses can have far-reaching consequences for businesses, and their limited coverage can expose them to significant risks.


The UK’s digital identity crisis

The impact of Aadhaar in India cannot be overstated; as part of a broader digital infrastructure, it arguably makes India a global leader in digital identity. ... In stark contrast, the UK amassed a paltry 8.6 million users for its GOV.UK Verify scheme before it was shut down in 2023 due to a variety of issues. Its replacement, GOV.UK One Login, has yet to be integrated across all government services, which will be key to adoption. It is fair to say that the UK currently has one of the lowest rates of digital identity adoption globally. For the UK, there are a number of reasons why this matters:
Missed economic opportunities: Digital identities can streamline business operations, reduce fraud, and enhance customer experiences, driving economic growth. Slow adoption means the UK may lag behind in this area.
Inefficiencies in public services: Effective digital identity systems can significantly reduce bureaucratic inefficiencies, saving time and resources for both citizens and the government. The UK’s slower adoption hampers these potential efficiencies.
Lag in innovation: Countries leading in digital identity are often at the forefront of broader digital innovation.



Quote for the day:

"You've got to get up every morning with determination if you're going to go to bed with satisfaction. " -- George Lorimer

Daily Tech Digest - May 16, 2024

Cultivating cognitive liberty in the age of generative AI

Cognitive liberty is a pivotal component of human flourishing that has been overlooked by traditional theories of liberty—primarily because we have taken for granted that our brains and mental experiences are under our own control. This assumption is being replaced with more nuanced understandings of the human brain and its interaction with our environment, our interactions with others, and our interdependence with technology. Cultivating cognitive liberty in the digital age will become increasingly vital to enable humans to exercise individual agency, nurture human creativity, discern fact and fiction, and reclaim our critical thinking skills amid unprecedented cognitive opportunities and risks. Generative AI tools like GPT-4 pose new challenges to cognitive liberty, including the potential to interfere with and manipulate our mental experiences. They can exacerbate biases and distortions that undermine the integrity and reliability of the information we consume, in turn influencing our beliefs, judgments, and decisions. 


Smart homes, smart choices: How innovation is redefining home furnishing

Most notably, the advent of innovations has made shopping for furniture online a far more enjoyable experience. It begins with options. Today, online furniture websites provide customers with a vastly larger catalog of choices than a brick-and-mortar store could imagine since there are no physical constraints in the digital realm. But vast selections alone are just the beginning. That’s why innovations like AR and VR are so important. Once shoppers identify potential items, AR and VR allow them to view each piece online. They can examine not just static images but pictures from all sides and angles. They can personalize it to fit their style and home. ... First, they understand various key factors, including the origin of the materials being used, how they were made, the labor practices involved, potential environmental impacts, and more. For Wayfair, we are leading the way by including sustainability certifications on approved items as part of our Shop Sustainably commitment. This shift is part of a larger movement called conscious consumerism, where purchasing decisions favor products that have positive social, economic, and environmental impacts.


A Guide to Model Composition

At its core, model composition is a strategy in machine learning that combines multiple models to solve a complex problem that cannot be easily addressed by a single model. This approach leverages the strengths of each individual model, providing more nuanced analyses and improved accuracy. Model composition can be seen as assembling a team of experts, where each member brings specialized knowledge and skills to the table, working together to achieve a common goal. Many real-world problems are too complicated for a one-size-fits-all model. By orchestrating multiple models, each trained to handle specific aspects of a problem or data type, we can create a more comprehensive and effective solution. There are several ways to implement model composition, including but not limited to:
Sequential processing: Models are arranged in a pipeline, where the output of one model serves as the input for the next. ...
Parallel processing: Multiple models run in parallel, each processing the same input independently. Their outputs are then combined, either by averaging, voting or through a more complex aggregation model, to produce a final result.
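
The wiring of both patterns can be shown with trivial stand-in models, as in the Python sketch below: a sequential pipeline feeds each model's output into the next, while a parallel ensemble gives every model the same input and averages the results. The "models" here are deliberately simple lambdas standing in for real trained models.

    # Minimal sketch of the two composition patterns with trivial stand-in "models":
    # a sequential pipeline chains outputs to inputs, and a parallel ensemble
    # averages independent predictions of the same input.
    from typing import Callable, List

    Model = Callable[[float], float]

    def sequential(models: List[Model], x: float) -> float:
        """Pipeline: the output of one model becomes the input of the next."""
        for model in models:
            x = model(x)
        return x

    def parallel_average(models: List[Model], x: float) -> float:
        """Ensemble: every model sees the same input; results are aggregated."""
        predictions = [model(x) for model in models]
        return sum(predictions) / len(predictions)

    # Stand-in models (e.g., a normalizer, a feature transform, and two predictors).
    normalize = lambda x: x / 100.0
    transform = lambda x: x ** 2
    predictor_a = lambda x: 2.0 * x + 1.0
    predictor_b = lambda x: 1.8 * x + 1.3

    print(sequential([normalize, transform, predictor_a], 50.0))  # 0.5 -> 0.25 -> 1.5
    print(parallel_average([predictor_a, predictor_b], 0.25))     # mean of 1.5 and 1.75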


Securing IoT devices is a challenging yet crucial task for CIOs: Silicon Labs CEO

Likewise, as IoT deployments expand, we’ll need scalable infrastructure and solutions capable of accommodating growing device numbers and data volumes. Many countries have their own nuanced regulatory compliance schemes, which add another layer of complexity, especially for data privacy and security regulations. Notably, in India, cost considerations, including initial deployment costs and ongoing maintenance expenses, can be a barrier to adoption, necessitating an understanding of return on investment. ... Silicon Labs has played a key role in advancing IoT and AI adoption through collaborations with industry and academia, including a recent partnership with IIIT-H in India. In 2022, we launched India's first campus-wide Wi-SUN network at the IIIT-H Smart City Living Lab, enabling remote monitoring and control of campus street lamps. This network provides students and researchers with hands-on experience in developing smart city solutions. Silicon Labs also supports STEM education initiatives like Code2College to inspire innovation in the IoT and AI fields.


Cyber resilience: A business imperative CISOs must get right

Often, organizations have more capabilities than they realize, but these resources can be scattered throughout different departments. And each group responsible for establishing cyber resilience might lack full visibility into the existing capabilities within the organization. “Network and security operations have an incredible wealth of intelligence that others would benefit from,” Daniels says. Many companies are integrating cyber resilience into their enterprise risk management processes. They have started taking proactive measures to identify vulnerabilities, assess risks, and implement appropriate controls. “This includes exposure assessment, regular validation such as penetration testing, and continuous monitoring to detect and respond to threats in real-time,” says Angela Zhao, director analyst at Gartner. ... The rise of generative AI as a tool for hackers further complicates organization’s resilience strategies. That’s because generative AI equips even low-skilled individuals with the means to execute complex cyber attacks. As a result, the frequency and severity of attacks might increase, forcing businesses to up their game. 


Is an open-source AI vulnerability next?

The challenges within the AI supply chain mirror those of the broader software supply chain, with added complexity when integrating large language models (LLMs) or machine learning (ML) models into organizational frameworks. For instance, consider a scenario where a financial institution seeks to leverage AI models for loan risk assessment. This application demands meticulous scrutiny of the AI model’s software supply chain and training data origins to ensure compliance with regulatory standards, such as prohibiting protected categories in loan approval processes. To illustrate, let’s examine how a bank integrates AI models into its loan risk assessment procedures. Regulations mandate strict adherence to loan approval criteria, forbidding the use of race, sex, national origin, and other demographics as determining factors. Thus, the bank must consider and assess the AI model’s software and training data supply chain to prevent biases that could lead to legal or regulatory complications. This issue extends beyond individual organizations. The broader AI technology ecosystem faces concerning trends. 
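
One concrete control implied by the loan-risk scenario is stripping protected attributes from the training features and failing loudly if any slip through, as in the sketch below. It assumes pandas is available; the column names are illustrative, and dropping the columns does not address proxy variables, which still require separate bias testing.

    # Minimal sketch of one control from the loan-risk scenario: drop protected
    # attributes before training and fail loudly if any remain. Column names are
    # illustrative; proxy variables still require separate bias testing.
    import pandas as pd

    PROTECTED = {"race", "sex", "national_origin", "religion", "age"}

    def training_features(df: pd.DataFrame) -> pd.DataFrame:
        cleaned = df.drop(columns=[c for c in df.columns if c in PROTECTED])
        leaked = PROTECTED & set(cleaned.columns)
        assert not leaked, f"protected attributes leaked into features: {leaked}"
        return cleaned

    applications = pd.DataFrame({
        "income": [52_000, 71_000],
        "debt_to_income": [0.31, 0.18],
        "sex": ["F", "M"],   # must never reach the model
        "race": ["A", "B"],
    })
    print(training_features(applications).columns.tolist())  # ['income', 'debt_to_income']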


Google’s call-scanning AI could dial up censorship by default, privacy experts warn

Google’s demo of the call scam-detection feature, which the tech giant said would be built into a future version of its Android OS — estimated to run on some three-quarters of the world’s smartphones — is powered by Gemini Nano, the smallest of its current generation of AI models meant to run entirely on-device. This is essentially client-side scanning: A nascent technology that’s generated huge controversy in recent years in relation to efforts to detect child sexual abuse material (CSAM) or even grooming activity on messaging platforms. ... Cryptography expert Matthew Green, a professor at Johns Hopkins, also took to X to raise the alarm. “In the future, AI models will run inference on your texts and voice calls to detect and report illicit behavior,” he warned. “To get your data to pass through service providers, you’ll need to attach a zero-knowledge proof that scanning was conducted. This will block open clients.” Green suggested this dystopian future of censorship by default is only a few years out from being technically possible. “We’re a little ways from this tech being quite efficient enough to realize, but only a few years. A decade at most,” he suggested.


Data strategy? What data strategy?

A recent survey of UKI SAP users found that only 12 percent of respondents had a data strategy that covers their entire organization - these are people who are very likely to be embarking on tricky migrations to S/4HANA. Without properly understanding and governing the data they’re migrating, they’re en route to some serious difficulties. That’s because, more often than not, when a digital transformation project is on the cards, data takes a back seat. In the flurry of deadlines, testing, and troubleshooting, it feels so much more important to get the infrastructure in place and deal with the data later. The single goal is switching on the new system. Fixing the data flaws that caused so many headaches with the old solution is rarely top of the list. But those flaws and headaches are telling you something: your data needs serious attention. Unless you take action, those data silos that slow down decision-making and the data management challenges that are a blocker to innovation will follow you to your new infrastructure.


Designing and developing APIs with TypeSpec

TypeSpec is in wide use inside Microsoft, having spread from its original home in the Azure SDK team to the Microsoft Graph team, among others. Having two of Microsoft’s largest and most important API teams using TypeSpec is a good sign for the rest of us, as it both shows confidence in the toolkit and ensures that the underlying open-source project has an active development community. Certainly, the open-source project, hosted on GitHub, is very active. It recently released TypeSpec 0.56 and has received over 2000 commits. Most of the code is written in TypeScript and compiled to JavaScript so it runs on most development systems. TypeSpec is cross-platform and will run anywhere Node.js runs. Installation is done via npm. While you can use any programmer’s editor to write TypeSpec code, the team recommends using Visual Studio Code, as a TypeSpec extension for VS Code provides a language server and support for common environment variables. This behaves like most VS Code language extensions, giving you diagnostic tools, syntax highlights, and IntelliSense code completion. 


What’s holding CTOs back?

“Obviously, technology strategy and business strategy have to be ultimately driven by the vision of the organization,” Jones says, “but it was surprising that over a third of CTOs we surveyed felt they weren’t getting clear vision and guidance.” The CTO role also means different things in different organizations. “The CTO role is so diverse and spans everything from a CTO who works for the CIO and is making the organization more efficient, all the way to creating visibility for the future and transformations,” Jones says. ... Plexus Worldwide’s McIntosh says internal politics and some level of bureaucracy are unavoidable for CTOs seeking to push forward technology initiatives. “Navigating and managing this within an organization requires a balance of experience and influence to lessen any potential negative impact,” he says. Experienced leaders who have been with a company a long time “are often skilled at understanding the intricate web of relationships, power dynamics, and competing interests that shape internal politics and bureaucratic hurdles,” McIntosh says.



Quote for the day:

"The leader has to be practical and a realist, yet must talk the language of the visionary and the idealist." -- Eric Hoffer