Daily Tech Digest - May 28, 2024

Partitioning an LLM between cloud and edge

By partitioning LLMs, we achieve a scalable architecture in which edge devices handle lightweight, real-time tasks while the heavy lifting is offloaded to the cloud. For example, say we are running medical scanning devices deployed worldwide. AI-driven image processing and analysis is core to the value of those devices; however, shipping huge images back to a central computing platform for diagnostics is far from optimal. Network latency will delay some of the processing, and if the network goes down, as it may in many rural areas, you’re out of business. ... The first step involves evaluating the LLM and the AI toolkits and determining which components can be effectively run on the edge. This typically includes lightweight models or specific layers of a larger model that perform inference tasks. Complex training and fine-tuning operations remain in the cloud or other externalized systems. Edge systems can preprocess raw data to reduce its volume and complexity before sending it to the cloud or processing it with their own local models.
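As a toy illustration of that split (the function names, compression step, and thresholds here are invented for illustration, not taken from the article), an edge device might shrink a scan and then decide whether to offload it:

```python
import zlib

def preprocess_at_edge(raw_scan: bytes) -> bytes:
    """Hypothetical edge-side step: compress raw scan data to cut
    network volume before shipping it to the cloud for inference."""
    return zlib.compress(raw_scan, level=9)

def should_offload(payload: bytes, network_up: bool, max_bytes: int = 10_000) -> bool:
    """Route to the cloud only when the link is up and the payload is
    small enough; otherwise fall back to the lightweight on-device model
    (e.g., in a rural deployment with no connectivity)."""
    return network_up and len(payload) <= max_bytes
```

The same decision point is where a production system would swap in real image downsampling or feature extraction instead of generic compression.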

How ISRO fosters a culture of innovation

As people move up the corporate totem pole, their attention to detail gives way to big-picture thinking, and rightly so. You can’t look beyond the horizon and yet mind your every step on the way to uncharted terrain. Yet when it comes to research and development, especially high-risk, high-impact projects, there is hardly any trade-off between thinking big and thinking in detail. You must do both. For instance, in the inaugural session of my last workshop, one of the senior directors was invited, and the first thing he noticed was a mistake in the session duration. ... Now imagine this situation in a corporate context. How likely is the boss to call out a rather silly mistake? It was innocuous for all practical purposes. Most won’t point it out, let alone address it immediately. But not at ISRO. ... Here’s the interesting thing. One of the participants was incessantly quizzing me, bordering on a challenge, and everyone was nonchalant about it. In a typical corporate milieu, such people would be shunned or asked to shut up. But not here. We had a volley of arguments, and the people around seemed to enjoy and encourage it. They were not only okay with varied points of view but also protective of them.

GoDaddy has 50 large language models; its CTO explains why

“What we’ve done is built a common gateway that talks to all the various large language models on the backend, and currently we support more than 50 different models, whether they’re for images, text or chat, or whatnot. ... “Obviously, this space is accelerating superfast. A year ago, we had zero LLMs and today we have 50 LLMs. That gives you some indication of just how fast this is moving. Different models will have different attributes and that’s something we’ll have to continue to monitor. But by having a mechanism with which we can monitor and control what we send and what we receive, we believe we can better manage that.” ... “In some ways, experiments that aren’t successful are some of the most interesting ones, because you learn what doesn’t work and that forces you to ask follow-up questions about what will work and to look at things differently. As teams saw the results of these experiments and saw the impact on customers, it really engaged them to spend more time with the technology and focus on customer outcomes.”
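A common gateway of the kind described can be sketched in a few lines. This is an invented illustration of the pattern, not GoDaddy's implementation: one entry point routes requests to any registered backend model while recording traffic for monitoring and control.

```python
class ModelGateway:
    """Illustrative common gateway: a single entry point that routes
    requests to registered backend models and logs all traffic."""

    def __init__(self):
        self._models = {}   # model name -> callable handler
        self.log = []       # (model, prompt, response) audit trail

    def register(self, name, handler):
        self._models[name] = handler

    def call(self, name, prompt):
        if name not in self._models:
            raise KeyError(f"unknown model: {name}")
        response = self._models[name](prompt)
        # Central monitoring point: every request and response passes
        # through here, regardless of which backend served it.
        self.log.append((name, prompt, response))
        return response
```

Because all 50 models sit behind one interface, swapping or deprecating a backend never touches the callers.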

How to combat alert fatigue in cybersecurity

Alert fatigue is the result of several related factors. First, today’s security tools generate an incredible volume of event data. This makes it difficult for security practitioners to distinguish between background noise and serious threats. Second, many systems are prone to false positives, which are triggered either by harmless activity or by overly sensitive anomaly thresholds. This can desensitize defenders, who may end up missing important attack signals. The third factor contributing to alert fatigue is the lack of clear prioritization. The systems generating these alerts often don’t have mechanisms that triage and prioritize the events. This can lead to paralyzing inaction because the practitioners don’t know where to begin. Finally, when alert records or logs do not contain sufficient evidence and response guidance, defenders are unsure of the next actionable steps. This confusion wastes valuable time and contributes to frustration and fatigue. ... The elements of the “SOC visibility triad” I mentioned earlier (NDR, EDR, and SIEM) are among the critical new technologies that can help.
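The missing triage step can be as simple as a scoring pass over incoming alerts. The fields and weighting below are hypothetical, a sketch of the idea rather than any product's scheme:

```python
def triage(alerts):
    """Hypothetical prioritization pass: score each alert by severity,
    asset criticality, and detection confidence, then sort descending
    so analysts know where to begin."""
    def score(alert):
        return (alert["severity"]
                * alert["asset_criticality"]
                * alert["confidence"])
    return sorted(alerts, key=score, reverse=True)
```

Even a crude ranking like this turns an undifferentiated wall of events into an ordered work queue.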

Driving buy-in: How CIOs get hesitant workforces to adopt AI

If willingness and skill are the two main dimensions that influence hesitancy toward AI, employees who question whether taking the time to learn the technology is worth the effort are at the intersection. These employees often believe the AI learning curve is too steep to justify embarking on in the first place, he notes. “People perceive that AI is something complex, probably because of all of these movies. They worry: Will they have time and effort to learn these new skills and to adapt to these new systems?” Jaksic says. This challenge is not unique to AI, he adds. “We all prefer familiar ways of working, and we don’t like to disrupt our established day-to-day activities,” he says. Perhaps the best inroads then is to show that learning enough about AI to use it productively does not require a monumental investment. To this end, Jaksic has structured a formal program at KEO for AI education in bite-size segments. The program, known as Summer of Innovation, is organized around lunchtime sessions taught by senior leaders around high-level AI concepts. 

Taking Gen AI mainstream with next-level automation

Gen AI needs to be accountable and auditable. It needs to be instructed and learn what information it can retrieve. Combining it with IA serves as the linchpin of effective data governance, enhancing the accuracy, security, and accountability of data throughout its lifecycle. Put simply, by wrapping Gen AI with IA, businesses have greater control of data and automated workflows, managing how data is processed, secured against unauthorized changes, and stored. It is this ‘process wrapper’ concept that will allow organizations to deploy Gen AI effectively and responsibly. Adoption and transparency of Gen AI, now, are imperative, as innovation continues to grow at pace. The past 12 months have seen significant innovations in large language models (LLMs) and Gen AI to simplify automations that tackle complex and hard-to-automate processes. ... Before implementing any sort of new automation technology, organizations must establish use cases unique to their business and undertake risk management assessments to avoid potential noncompliance, data breaches and other serious issues.
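One way to picture the ‘process wrapper’ is a governed entry point that enforces an approved list of data sources and audits every call. This sketch uses invented names and stands in for no particular product:

```python
import time

def process_wrapper(llm_call, allowed_sources):
    """Illustrative 'process wrapper': intelligent automation gates
    which data sources the model may draw on, and keeps an audit
    trail of every interaction for accountability."""
    audit_log = []

    def governed_call(prompt, source):
        if source not in allowed_sources:
            # Block retrieval from unapproved data, but still record it.
            audit_log.append((time.time(), source, "BLOCKED"))
            raise PermissionError(f"source not approved: {source}")
        response = llm_call(prompt)
        audit_log.append((time.time(), source, "OK"))
        return response

    return governed_call, audit_log
```

The audit log is what makes the Gen AI layer auditable after the fact; the allow-list is what makes it instructable about what it may retrieve.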

Third-party software supply chain threats continue to plague CISOs

As software gets more complex with more dependent components, it quickly becomes difficult to detect coding errors, whether they are inadvertent or added for malicious purposes as attackers try to hide their malware. “A smart attacker would just make their attack look like an inadvertent vulnerability, thereby creating extremely plausible deniability,” Williams says. ... “No single developer should be able to check in code without another developer reviewing and approving the changes,” the agency wrote in its report. This was one of the problems with the XZ Utils compromise, where a single developer gained the trust of the team and was able to make modifications on their own. One method is to combine a traditional third-party risk management program with specialized consultants who can seek out and eliminate these vulnerabilities, such as the joint effort between PwC and ReversingLabs’ automated tools. The open-source community isn’t standing still either. One solution is a tool introduced earlier this month by the Open Source Security Foundation called Siren.

Who is looking out for your data? Security in an era of wide-spread breaches

Beyond organizations introducing the technology behind closed doors to keep data safe, the interest in biometric smartcards shows that consumers also want to see improved protection play out in their physical transactions and finance management. This paradigm shift reflects not only a desire for heightened protection but also an acknowledgement of the limitations of traditional authentication methods. Attributing access to a fingerprint or facial recognition affirms to that person, in that moment, that their credentials are unique, and therefore that the data inside is safe. Encrypting the fingerprint and personal identity data within the card itself strengthens this defense further, ensuring that sensitive information remains inaccessible to unauthorized parties. Biometric smartcards also change the dynamic of data storage. Rather than housing biometric credentials in centralized databases, where many targets are gathered in one location, smartcards sidestep that risk and effectively mitigate the vulnerabilities associated with centralized storage.

The Role of AI in Developing Green Data Centers

Green data centers, powered by AI technologies, are at the forefront of revolutionizing the digital infrastructure landscape with their significantly reduced environmental impact. These advanced facilities leverage AI to optimize power and cooling systems, leading to a substantial reduction in energy consumption and carbon footprint. This not only reduces greenhouse gas emissions but also paves the way for more sustainable operational practices within the IT industry. Furthermore, sustainability initiatives integral to green data centers extend beyond energy efficiency. They encompass the use of renewable energy sources such as wind, solar, and hydroelectric power to further diminish the reliance on fossil fuels. ... AI-driven solutions can continuously monitor and analyze vast amounts of data regarding a data center’s operational parameters, including temperature fluctuations, server loads, and cooling system performance. By leveraging predictive analytics and machine learning algorithms, AI can anticipate potential inefficiencies or malfunctions before they escalate into more significant issues that could lead to excessive power use.
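A toy version of such monitoring is a rolling-mean check that flags a telemetry reading, say a rack temperature, that drifts well outside its recent range. The window and tolerance values are illustrative, and production systems would use far richer models:

```python
def flag_anomalies(readings, window=5, tolerance=2.0):
    """Flag index i when reading[i] deviates from the rolling mean of
    the previous `window` readings by more than `tolerance` standard
    deviations -- a crude stand-in for predictive monitoring that
    catches inefficiencies before they drive up power use."""
    flagged = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = sum(recent) / window
        variance = sum((x - mean) ** 2 for x in recent) / window
        std = variance ** 0.5 or 1e-9   # avoid dividing by a zero spread
        if abs(readings[i] - mean) > tolerance * std:
            flagged.append(i)
    return flagged
```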

Don't Expect Cybersecurity 'Magic' From GPT-4o, Experts Warn

Despite the fresh capabilities, don't expect the model to fundamentally change how a gen AI tool helps either attackers or defenders, said cybersecurity expert Jeff Williams. "We already have imperfect attackers and defenders. What we lack is visibility into our technology and processes to make better judgments," Williams, the CTO at Contrast Security, told Information Security Media Group. "GPT-4o has the exact same problem. So it will hallucinate non-existent vulnerabilities and attacks as well as blithely ignore real ones." ... Attackers might still gain some minor productivity boosts thanks to GPT-4o's fresh capabilities, including its ability to do multiple things at once, said Daniel Kang, a machine learning research scientist who has published several papers on the cybersecurity risks posed by GPT-4. These "multimodal" capabilities could be a boon to attackers who want to craft realistic-looking deepfakes that combine audio and video, he said. The ability to clone voices is one of GPT-4o's new features, although other gen AI models already offered this capability, which experts said can potentially be used to commit fraud by impersonating someone else's identity.

Quote for the day:

"Defeat is not bitter unless you swallow it." -- Joe Clark

Daily Tech Digest - May 27, 2024

10 big devops mistakes and how to avoid them

“One of the significant challenges with devops is ensuring seamless communication and collaboration between development and operations teams,” says Lawrence Guyot, president of IT services provider Empowerment through Technology & Education (ETTE). ... Ensuring the security of the software supply chain in a devops environment can be challenging. “The speed at which devops teams operate can sometimes overlook essential security checks,” Guyot says. “At ETTE, we addressed this by integrating automated security tools directly into our CI/CD pipeline, conducting real-time security assessments at every stage of development.” This integration not only helped the firm identify vulnerabilities early, but also ensured that security practices kept pace with rapid deployment cycles, Guyot says. ... “Aligning devops with business goals can be quite the hurdle,” says Remon Elsayea, president of TechTrone IT Services, an IT solutions provider for small and mid-sized businesses. “It often seems like the rapid pace of devops initiatives can outstrip the alignment with broader business objectives, leading to misaligned priorities,” Elsayea says.

Why We Need to Get a Handle on AI

A recent World Economic Forum report also found a widening cyber inequity, which is accelerating the profound impact of emerging technologies. The path forward therefore demands strategic thinking, concerted action, and a steadfast commitment to cyber resilience. Again, this isn’t new. Organizations of all sizes and maturity levels have often struggled to maintain the central tenets of organizational cyber resilience. At the end of the day, it is much easier to use technology to create malicious attacks than it is to use technology to detect such a wide spectrum of potential attack vectors and vulnerabilities. The modern attack surface is vast and can overwhelm an organization as it determines how to secure it. With this increased complexity and proliferation of new devices and attack vectors, people and organizations have become a bigger vulnerability than ever before. It is often said that humans are the biggest risk when it comes to security, and deepfakes can more easily trick people into taking actions that benefit the attackers. Therefore, what questions should security teams be asking to protect their organization?

Demystifying cross-border data transfer compliance for Indian enterprises

The variability of these laws introduces complex compliance issues. As Indian enterprises expand globally, the significance of robust data compliance management escalates. Organizations like ours assist companies worldwide with customized solutions tailored to the complexities of cross-border data transfer compliance. We ensure that businesses not only meet international data protection standards but also enhance their data governance practices through our comprehensive suite of tools. The evolution of India’s data localization policies could significantly influence global digital diplomacy. Moving from strict data localization to permitting certain cross-border data flows aligns India more closely with global digital trade norms, potentially enhancing its relationships with major markets like the US and EU. India is proactively revising its legal frameworks to better address the intricacies of cross-border data transfers within the realm of data privacy, especially for businesses. The forthcoming DPDPA regulations aim to balance the need for data protection with the operational requirements of digital commerce and governance.

Digital ID adoption: Implementation and security concerns

Digital IDs are poised to revolutionize sectors that rely heavily on secure and efficient identity verification. ... “As the Forrester experts note in the study, the complexities and disparities of global implementation across various landscapes highlight the strategic necessity of adopting a hybrid approach to digital IDs. Moreover, there is no single, universally accepted set of global standards for digital IDs that applies across all countries and sectors. Therefore, the large number of companies at the stage of active implementation demonstrates a growing need for frameworks and guidelines that aim to foster interoperability, security, and privacy across different digital ID systems,” said Ihar Kliashchou, CTO at Regula. “The good news is that several international organizations and standards bodies — New Technology Working Group in the International Civil Aviation Organization, the International Organization for Standardization (ISO), etc. — are working towards those standards. This seems to be a case in which slow and steady wins the race,” concluded Kliashchou.

Forrester: Preparing for the era of the AI PC

AI PCs are now disrupting the cloud-only AI model to bring that processing to local devices running any OS. But what is an AI PC exactly? Forrester defines an AI PC as a PC embedded with an AI chip and algorithms specifically designed to improve the experience of AI workloads across the central processing unit (CPU), graphics processing unit (GPU) and neural processing unit (NPU). ... An AI PC also offers a way to improve the collaboration experience. Dedicated AI chipsets will improve the performance of classic collaboration features, such as background blur and noise suppression, by sharing resources across CPUs, GPUs and NPUs. On-device AI offers the ability to render a much finer distinction between the subject and the blurred background. More importantly, the AI PC will also enable new use cases, such as eye contact correction, portrait blur, auto framing, lighting adjustment and digital avatars. Another benefit of AI chipsets on PCs is that they provide the means to optimise device performance and longevity. Previous AI use cases were feasible on PCs, but they drained the battery quickly. The addition of an NPU will help preserve battery life while employees run sustained AI workloads.

Gartner Reveals 5 Trends That Will Make Software Engineer

Herschmann said that while there is a worry that AI could eliminate coding jobs instead of just enhancing them, that worry is somewhat unfounded. "If anything, we believe there's going to be a need for more developers, which may at first seem a little counterintuitive, but the reality is that we're still in the early stages of all of this," he said. "While generative AI is quite impressive in the beginning, if you dig a little bit deeper, you realize it's shinier than it really is," Herschmann said. So instead of replacing developers, AI will be more of a partner to them. ... Coding is just a small part of a developer's role. There are a lot of other things they need to do, such as keeping the environment running, doing configuration work, and so on. So it makes sense to have a platform engineering team take some of this work off developers' plates so they can focus on building the product, according to Herschmann. "Along with that though comes a potential scaling effect because you can then provide that same environment and the skills of that team to others as you scale up," he said.

Beyond blockchain: Unlocking the potential of Directed Acyclic Graphs (DAGs)

DAGs are a type of data structure that uses a topological ordering, allowing for multiple branches that converge but do not loop back on themselves. Imagine a network of interconnected highways where each transaction can follow its own distinct course, branching off and joining forces with other transactions as required. This structure enables simultaneous transactions, eliminating the need for sequential processing, which is a bottleneck in traditional blockchain systems. ... One of the notable challenges of traditional blockchain technology is its scalability. DAGs address this issue by allowing more transactions to be processed in parallel, significantly increasing throughput, a key advantage for real-time applications in commodity trading and supply chain management. DAGs are more energy-efficient than proof-of-work blockchains, as they do not require substantial computational power for intensive mining activities, aligning with global and particularly India’s increasing focus on sustainable technological solutions. But the benefits of DAGs don’t stop here. Imagine a scenario where a shipment of perishable goods is delayed due to unforeseen circumstances, such as adverse weather conditions.
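The topological ordering that lets DAG transactions confirm in parallel can be sketched with Kahn's algorithm. In this illustrative snippet, each batch contains transactions whose dependencies all sit in earlier batches, so everything within a batch can be processed simultaneously:

```python
from collections import defaultdict

def parallel_batches(edges):
    """Group a DAG's transactions into batches that can be processed in
    parallel. `edges` is a list of (a, b) pairs meaning transaction `a`
    must be confirmed before `b`."""
    deps = defaultdict(set)      # node -> unconfirmed prerequisites
    children = defaultdict(set)  # node -> nodes that depend on it
    nodes = set()
    for a, b in edges:
        nodes |= {a, b}
        deps[b].add(a)
        children[a].add(b)
    batches = []
    ready = {n for n in nodes if not deps[n]}  # no prerequisites left
    while ready:
        batches.append(sorted(ready))
        next_ready = set()
        for n in ready:
            for child in children[n]:
                deps[child].discard(n)
                if not deps[child]:
                    next_ready.add(child)
        ready = next_ready
    return batches
```

A linear blockchain would force all five transactions below into a single sequence; the DAG ordering collapses them into three parallel rounds.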

Pioneering the future of personalised experiences and data privacy in the digital age

Zero-party data (ZPD) is at the core of Affinidi's strategy and is crucial for businesses navigating consumer interactions. ZPD refers to information consumers willingly share with companies for specific benefits, such as personalised offers and services. Consider an avid traveller who frequently books trips online. He might share his travel preferences with a travel company, such as favourite destinations, preferred accommodation types, and activity interests. This data allows the company to tailor its offerings precisely to his tastes. For instance, if he loves beach destinations and luxury hotels, the company can send him personalised travel packages featuring exclusive beach resorts with premium amenities. ... As data privacy regulations tighten, businesses must prioritise consented and accurate data sources, reducing legal risks and dependence on external data pools. Trust can be viewed as a currency, shaping customers' loyalty and buying decisions. A survey by PwC showed that 33% of customers pay a premium to companies because they trust them.

Shut the back door: Understanding prompt injection and minimizing risk

You don’t have to be an expert hacker to attempt to misuse an AI agent; you can just try different prompts and see how the system responds. Some of the simplest forms of prompt injection are when users attempt to convince the AI to bypass content restrictions or ignore controls. This is called “jailbreaking.” One of the most famous examples of this came back in 2016, when Microsoft released a prototype Twitter bot that quickly “learned” how to spew racist and sexist comments. More recently, Microsoft Bing (now Microsoft Copilot) was successfully manipulated into giving away confidential data about its construction. Other threats include data extraction, where users seek to trick the AI into revealing confidential information. Imagine an AI banking support agent that is convinced to give out sensitive customer financial information, or an HR bot that shares employee salary data. And now that AI is being asked to play an increasingly large role in customer service and sales functions, another challenge is emerging. Users may be able to persuade the AI to give out massive discounts or inappropriate refunds.
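Two of the simplest (and admittedly naive) mitigations, keyword screening of inputs and structural separation of instructions from user data, can be sketched as follows. The phrases and tags are illustrative only; real defenses layer many more controls:

```python
# Illustrative list of common jailbreak phrasings -- not exhaustive.
SUSPICIOUS = (
    "ignore previous instructions",
    "reveal your system prompt",
    "disregard your rules",
)

def screen_input(user_text):
    """Naive first-line filter: return any jailbreak phrases found in
    the prompt so the application can block or escalate it."""
    lowered = user_text.lower()
    return [phrase for phrase in SUSPICIOUS if phrase in lowered]

def build_prompt(system_rules, user_text):
    """Keep system instructions structurally separate from user input,
    so downstream handling can treat the latter as data, not commands."""
    return (f"<system>{system_rules}</system>\n"
            f"<user_data>{user_text}</user_data>")
```

Keyword filters are trivially bypassed on their own, which is why they only make sense as one layer among output filtering, least-privilege tool access, and human review of sensitive actions.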

Say goodbye to break-and-fix patches

A ‘break-and-fix’ mindset can be necessary in emergency situations, but it can also make things worse. While it can be tempting to view maintenance work as adding little value, failing to address these problems properly will only create future issues as you accumulate tech debt. Fixing those issues will require more resources — time, money, skills — that will undoubtedly hurt your organization. ... Tech debt is one of those “invisible issues” hiding in IT systems. Opting for quick fixes to solve immediate issues, rather than undertaking comprehensive upgrades might seem cost-effective and straightforward at first. However, over time, the accumulation of these patches contributes significantly to tech debt. ... Despite the potential consequences of inadequate and reactive maintenance, adopting a more proactive approach can be challenging for many businesses. Economic pressures and budgetary constraints are forcing leaders to reduce expenses and ‘do more with less’ — this leads to situations where areas not traditionally viewed as value-adding (like maintenance) are deprioritized. This is where managed services can help. 

Quote for the day:

"Smart leaders develop people who develop others; don't waste your time on those who won't help themselves." -- John C. Maxwell

Daily Tech Digest - May 26, 2024

The modern CISO: Scapegoat or value creator?

To showcase the value of their programs and demonstrate effectiveness, CISOs must establish clear communication and overcome the disconnect between the board and their team. It’s up to the CISO to ensure the board understands the level of cyber risk their organization is facing and what they need to increase the cyber resilience of their organization. Presenting cyber risk levels in monetary terms with actionable next steps is necessary to get the board of directors on the same page and open an honest line of communication, while elevating the cybersecurity team to the role of value creator. ... CISOs are deeply wary about sharing too many details on their cybersecurity posture in the public domain, because of the unnecessary and preventable risk of exposing their organizations to cyberattacks, which are expected to cause $10.5 trillion in damages by 2025. Filing an honest 10-K while preserving your organization’s cyber defenses requires a delicate balance. We’ve already seen Clorox fall victim when the balance was off. ... Given the pace at which the cybersecurity landscape is continuing to evolve, the CISO’s job is getting tougher.

This Week in AI: OpenAI and publishers are partners of convenience

In an appearance on the “All-In” podcast, Altman said that he “definitely [doesn’t] think there will be an arms race for [training] data” because “when models get smart enough, at some point, it shouldn’t be about more data — at least not for training.” Elsewhere, he told MIT Technology Review’s James O’Donnell that he’s “optimistic” that OpenAI — and/or the broader AI industry — will “figure a way out of [needing] more and more training data.” Models aren’t that “smart” yet, leading OpenAI to reportedly experiment with synthetic training data and scour the far reaches of the web — and YouTube — for organic sources. But let’s assume they one day don’t need much additional data to improve by leaps and bounds. ... Through licensing deals, OpenAI effectively neutralizes a legal threat — at least until the courts determine how fair use applies in the context of AI training — and gets to celebrate a PR win. Publishers get much-needed capital. And the work on AI that might gravely harm those publishers continues.

Private equity looks to the CIO as value multiplier

A newer way of thinking about value creation focuses on IT, he says, because nearly every company, perhaps even the mom-and-pop coffee shop down the street, is a heavy IT user. “With this third wave, we’re seeing private equity firms retain in-house IT leadership, and that in-house IT leadership has led to more value creation,” Buccola says. “Firms with great IT leadership, a sound IT strategy, and a forward-thinking IT strategy, are creating more value.” ... “All roads lead to IT,” says Corrigan, a veteran of PE-backed firms, with World Insurance backed by Goldman Sachs and Charlesbank. “Every aspect of the business is dependent on some type of technology.” Corrigan sees CIOs being more frequently consulted when PE-backed firms look to IT systems to drive operational efficiencies. In some cases, cutting costs is a quicker path to return on investment than revenue growth. “Every dollar you can cut out of the bottom line is worth several dollars of revenue generated,” he says. ... “The modern CIO in a private equity environment is no longer just a back-office role but a strategic partner capable of driving the business forward,” he says.

Sad Truth Is, Bad Tests Are the Norm!

When it comes to testing, many people seem to hold the world view that hard-to-maintain tests are the norm and are acceptable. In my experience, the major culprits are BDD frameworks based on text feature files. This amplifies waste. In theory, the extra feature file layer allows the user to swap out the language at a later date; allows a business person to write user stories and/or acceptance criteria; allows a business person to read those user stories and acceptance criteria; fosters collaboration; and so on. In practice, you have added more complexity than you think, for little benefit. I am explicitly critiquing the approach of writing the extra feature file layer first, not the benefits of BDD as a concept. You test more efficiently, with better results, by not writing the feature file layer, as with Smart BDD, where it’s generated by code. Here I compare the complexities and differences between Cucumber and Smart BDD. ... Culture is hugely important. I’m sure we, our bosses, and our senior leaders would all ultimately agree with the following: for more value, you need more feedback and less waste; for more feedback, you need more value and less waste; for less waste, you need more value and more feedback.
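To make the code-first idea concrete (this is an invented Python sketch illustrating the general approach, not Smart BDD's actual API), the readable Given/When/Then narrative can be generated from the test code itself, leaving no separate feature file to drift out of sync:

```python
class Scenario:
    """Minimal code-first BDD sketch: steps are plain callables, and the
    human-readable report is derived from the code that actually ran."""

    def __init__(self, title):
        self.title = title
        self.steps = []

    def given(self, text, fn): return self._step("Given", text, fn)
    def when(self, text, fn):  return self._step("When", text, fn)
    def then(self, text, fn):  return self._step("Then", text, fn)

    def _step(self, kind, text, fn):
        fn()                                  # execute the step
        self.steps.append(f"{kind} {text}")   # record the narrative
        return self

    def report(self):
        return "\n".join([f"Scenario: {self.title}"] + self.steps)
```

The narrative and the behavior come from one artifact, so they cannot disagree, which is the maintenance win the feature-file-first approach gives up.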

6 Months Under the SEC’s Cybersecurity Disclosure Rules

There have been calls for regulatory harmonization. For example, the Biden-Harris Administration’s National Cybersecurity Strategy released last year calls for harmonization and streamlining of new and existing regulations to ease the burden of compliance. But in the meantime, enterprise leadership teams must operate in this complicated regulatory landscape, made only more complicated by budgetary issues. “Security budgets aren't growing for the most part. So, there's this tension between diverting resources to security versus diverting resources to compliance … on top of everything else that the CISOs have going on,” says Algeier. So, what should CISOs and enterprise leadership teams be doing as they continue to work under these SEC rules and other regulatory obligations? “CISOs should keep in mind the ability to quickly, easily, and efficiently fulfill the requirements laid out by the SEC, especially if they were to fall victim to an attack,” says Das. “This means having not only the right processes in place, but investments into tools that can ensure reporting occurs in the newly condensed timeline.”

Despite increased budgets, organizations struggle with compliance

“While regulations are driving strategy shifts and increased budgets, the talent shortage and fragmented infrastructure remain obstacles to compliance and resilience. To succeed, organizations must find the right balance between human expertise for complex situations and AI-enhanced automation tools for routine tasks. This will alleviate operational strain and ensure security professionals can focus on the parts of the job where human judgment is irreplaceable.” ... 93% of organizations report rethinking their cybersecurity strategy in the past year due to the rise of new regulations, with 58% stating they have completely reconsidered their approach. The strategy shifts are also impacting the roles of cybersecurity decision-makers, with 45% citing significant new responsibilities. 92% of organizations reported an increase in their allocated budgets. Among these organizations, a significant portion (36%) witnessed budget increases of 20% to 49%, and a notable 23% saw increases exceeding 50%. 

Fundamentals of Dimensional Data Modeling

Dimensional modeling focuses its diagramming on facts and dimensions. Facts contain crucial quantitative data to track business processes; examples of these metrics include sales figures or the number of subscriptions. Dimensions contain referential pieces of information; examples include customer name, price, date, or location. Keeping the dimensions separate from facts makes it easier for analysts to slice and dice and filter data to align with the relevant context underlying a business problem. ... Dimensional modeling provides a basis for meaningful analytics gathered from a data warehouse for many reasons. Its processes lead to standardizing dimensions through presenting the data blueprint intuitively. ... Dimensional data modeling promises quick access to business insights when searching a data warehouse. Modelers provide a template to guide business conversations across various teams by selecting the business process, defining the grain, and identifying the dimensions and fact tables. Alignment in the design requires these processes, and Data Governance plays an integral role in getting there.
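A minimal star schema makes the fact/dimension split concrete. The tables and column names below are invented for illustration; the point is that slicing by any attribute is just a join from the fact table to a dimension:

```python
# Toy star schema: one fact table plus two dimension tables.
dim_customer = {1: {"name": "Acme", "region": "West"},
                2: {"name": "Birch", "region": "East"}}
dim_date = {20240501: {"month": "2024-05"},
            20240601: {"month": "2024-06"}}
fact_sales = [
    {"customer_id": 1, "date_id": 20240501, "amount": 120.0},
    {"customer_id": 2, "date_id": 20240501, "amount": 80.0},
    {"customer_id": 1, "date_id": 20240601, "amount": 50.0},
]

def sales_by(attr, dimension, key):
    """Slice and dice: join each fact row to a dimension table and
    total the `amount` metric per value of the chosen attribute."""
    totals = {}
    for row in fact_sales:
        value = dimension[row[key]][attr]
        totals[value] = totals.get(value, 0.0) + row["amount"]
    return totals
```

The same fact rows answer both "sales by region" and "sales by month" with no restructuring, which is exactly what separating dimensions from facts buys the analyst.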

Why the AI Revolution Is Being Led from Below

If shadow IT was largely defined by some teams’ use of unauthorized vendors and platforms, shadow AI is often driven by the use of AI tools like ChatGPT by individual employees and users, on their own and even surreptitiously. ... So why is that a problem? The proliferation of shadow AI can deliver many of the same benefits as officially sanctioned AI strategies, streamlining processes, automating repetitive tasks, and enhancing productivity. Employees are mainly drawn to deploy their own AI tools for precisely these reasons — they can hand off chunks of taxing work to these invisible assistants. Some industry observers see the plus side of all this and are actively encouraging the “democratization” of AI tools. At this week’s Financial Brand Forum 2024, Cornerstone Advisors’ Ron Shevlin made it his top recommendation: “My #1 piece of advice is ‘drive bottom-up use.’ Encourage widespread AI experimentation by your team members. Then document and share the process and output improvements as widely as possible.”

A Strategic Approach to Stopping SIM Swap Fraud

Fraudsters are cautious about their return on investment. SIM swap fraud is a high-risk endeavor, and they typically expect higher rewards. It involves the risk of physically visiting telco operator premises, obtaining genuine-looking customer identification documents, using employee mules, or bribing bank or telco staff. Their targets are mostly high-balance accounts, including both bank accounts and wallets. Over the years, we have learned that customers with substantial account balances might often share bank details and OTPs during social engineering schemes, but they typically refrain from sharing their PIN due to the perceived risk involved. Even if a small percentage of customers were to share their PIN, the risk would still be minimized, as the majority of potential victims would refrain from sharing their PIN. The fraudsters would need to compromise at three levels instead of two: data gathering, compromising the telco operator, and persuading the customer. If customers detect something suspicious, they may become alert, resulting in fraudsters wasting their investment.

Complexity snarls multicloud network management

While each cloud provider does its best to make networking simple across clouds, all have very nuanced differences and varied best practices for approaching the same problem, says Ed Wood, global enterprise network lead at business advisory firm Accenture. This makes being able to create enterprise-ready, secured networks across the cloud challenging, he adds. Wasim believes that a lack of intelligent data utilization at crucial stages, from data ingestion to proactive management, further complicates the process. “The sheer scale of managing resources, coupled with the dynamic nature of cloud environments, makes it challenging to achieve optimal performance and efficiency.” Making network management even more challenging is a lack of clarity on roles and responsibilities. This can be attributed to an absence of agreement on shared responsibility models, Wasim says. As a result, stakeholders, including customers, cloud service providers, and any involved third parties, might each hold different views on responsibility and accountability regarding data compliance, controls, and cloud operations management.

Quote for the day:

"You may be disappointed if you fail, but you are doomed if you don't try." -- Beverly Sills

Daily Tech Digest - May 22, 2024

Guide to Kubernetes Security Posture Management (KSPM)

Bad security posture impacts your ability to respond to new and emerging threats because of extra “strain” on your security capabilities caused by misconfigurations, gaps in tooling, or inadequate training. ... GitOps manages all cluster changes via Configuration as Code (CaC) in Git, eliminating manual cluster modifications. This approach aligns with the Principle of Least Privilege and offers benefits beyond security. GitOps ensures deployment predictability, stability and admin awareness of the cluster’s state, preventing configuration drift and maintaining consistency across test and production clusters. Additionally, it reduces the number of users with write access, enhancing security. ... Human log analysis is crucial for retrospectively reviewing security incidents. However, real-time monitoring and correlation are essential for detecting incidents initially. While manual methods like SIEM solutions with dashboards and alerts can be effective, they require significant time and effort to extract relevant data. 
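The configuration-drift prevention GitOps provides can be reduced to a simple idea: the manifest committed to Git is the source of truth, and anything the live cluster reports that differs from it is drift. A minimal sketch, using plain dicts rather than real Kubernetes objects:

```python
# Toy drift check: compare the desired state (from Git) against the
# observed live state and report every field that differs.

def find_drift(desired: dict, live: dict) -> dict:
    """Return {key: (desired_value, live_value)} for fields that differ."""
    return {
        k: (desired.get(k), live.get(k))
        for k in desired.keys() | live.keys()
        if desired.get(k) != live.get(k)
    }

# Hypothetical deployment fields, simplified for illustration.
desired = {"replicas": 3, "image": "api:v1.4", "readOnlyRootFilesystem": True}
live    = {"replicas": 5, "image": "api:v1.4", "readOnlyRootFilesystem": True}

print(find_drift(desired, live))  # {'replicas': (3, 5)}
```

A real GitOps controller runs this kind of reconciliation continuously and reverts the live state to match Git, which is what keeps test and production clusters consistent.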

Where’s the ROI for AI? CIOs struggle to find it

The AI market is still developing, and some companies are adopting the technology without a specific use case in mind, he adds. Kane has seen companies roll out Microsoft Copilot, for example, without any employee training about its uses. ... “I have found very few companies who have found ROI with AI at all thus far,” he adds. “Most companies are simply playing with the novelty of AI still.” The concern about calculating the ROI also rings true to Stuart King, CTO of cybersecurity consulting firm AnzenSage and developer of an AI-powered risk assessment tool for industrial facilities. With the recent red-hot hype over AI, many IT leaders are adopting the technology before they know what to do with it, he says. “I think back to the first discussions that we had within the organizations we’re working with, and it was a case of, ‘Here’s this great new thing that we can use now, let’s go out and find a use for it,’” he says. “What you really want to be doing is finding a problem to solve with it first.” As a developer who has integrated AI into his own software, King is not an AI skeptic. 

100 Groups Urge Feds to Put UHG on Hook for Breach Notices

Some experts advise HIPAA-regulated entities that are likely affected by a Change Healthcare breach to take precautionary measures now to prepare for their potential notification duties involving a compromise of their patients' PHI. ... HIPAA-regulated Change Healthcare customers also have an obligation under HIPAA to perform "reasonable diligence" to investigate and obtain information about the incident to determine whether the incident triggers notice obligations to their patients or members, said attorney Sara Goldstein of law firm BakerHostetler. Reasonable diligence includes Change Healthcare customers frequently checking UHG and Optum's websites for updates on the restoration and data analysis process, contacting their Change Healthcare account representative on a regular basis to see if there are any updates specific to their organization, and engaging outside privacy counsel to submit a request for information directly to UnitedHealth Group to obtain further information about the incident, Goldstein said.

‘Innovation Theater’ in Banking Gives Way to a More Realistic and Productive Function

The conservative approach many institutions are taking to GenAI reflects that reality. Buy Now, Pay Later meanwhile makes a great example of how exciting new innovations can unexpectedly reveal a dark side. ... In many institutions, innovation has become less about pure invention and more about applying what’s out there already in new ways and combinations to solve common problems. Doing so doesn’t necessarily require geniuses, but you do need highly specialized “plumbers” who can link together multiple technologies in smart ways. Even the regulatory view has evolved. There was a time when federal regulators held open doors to innovation, even to the extent of offering “sandboxes” to let innovations sprout without weighing them down initially with compliance burdens. But the Consumer Financial Protection Bureau, under the Biden administration, did away with its sandbox early on. Washington today walks a more cautious line on innovation, and that line could veer. The bottom line? Innovators who take their jobs, and the impact of their jobs, seriously, realize that banking innovation must grow up.

AI glasses + multimodal AI = a massive new industry

Both the OpenAI and Google demos clearly reveal a future where, thanks to the video mode in multimodal AI, we’ll be able to show AI something, or a room full of somethings, and engage with a chatbot to help us know, process, remember or understand. It would be all very natural, except for one awkward element. All this holding and waving around of phones to show it what we want it to “see” is completely unnatural. Obviously — obviously! — video-enabled multimodal AI is headed for face computers, a.k.a. AI glasses. And, in fact, one of the most intriguing elements of the Google demo was that during a video demonstration, the demonstrator asked Astra-enhanced Gemini if it remembered where her glasses were, and it directed her back to a table, where she picked up the glasses and put them on. At that point, the glasses — which were prototype AI glasses — seamlessly took over the chat session from the phone (the whole thing was surely still running on the phone, with the glasses providing the camera, microphones and so on).

Technological complexity drives new wave of identity risks

The concept of zero standing privilege (ZSP) requires that a user be granted only the minimum levels of access and privilege needed to complete a task, and only for a limited amount of time. Should an attacker gain entry to a user’s account, ZSP ensures there is far less potential for attackers to access sensitive data and systems. The study found that 93% of security leaders believe ZSP is effective at reducing access risks within their organization. Additionally, 91% reported that ZSP is being enforced across at least some of their company’s systems. As security leaders face greater complexity across their organizations’ systems and escalating attacks from adversaries, it’s no surprise that risk reduction was cited as respondents’ top priority for identity and access management (55%). This was followed by improving team productivity (50%) and automating processes (47%). Interestingly, improving user experience was cited as the top priority among respondents who experienced multiple instances of attacks or breaches due to improper access in the last year.
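The two defining properties of ZSP, task-scoped access and automatic expiry, can be sketched in a few lines. The class and field names below are illustrative, not any product's API:

```python
import time

# Minimal sketch of zero standing privilege: access is granted
# just-in-time, scoped to one privilege, and lapses on its own.

class TemporaryGrant:
    def __init__(self, user: str, privilege: str, ttl_seconds: float):
        self.user = user
        self.privilege = privilege
        self.expires_at = time.monotonic() + ttl_seconds

    def is_valid(self) -> bool:
        return time.monotonic() < self.expires_at

grant = TemporaryGrant("alice", "db:read", ttl_seconds=0.05)
print(grant.is_valid())   # True immediately after the grant
time.sleep(0.1)
print(grant.is_valid())   # False once the TTL lapses
```

The security payoff is in what is absent: there is no standing credential for an attacker to steal, only short-lived grants that are worthless once expired.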

The Legal Issues to Consider When Adopting AI

Different types of data bring different issues of consent and liability. For example, consider whether your data is personally identifiable information, synthetic content (typically generated by another AI system), or someone else’s intellectual property. Data minimization—using only what you need—is a good principle to apply at this stage. Pay careful attention to how you obtained the data. OpenAI has been sued for scraping personal data to train its algorithms. And, as explained below, data-scraping can raise questions of copyright infringement. ... Companies also need to consider the potential for inadvertent leakage of confidential and trade-secret information by an AI product. If they allow employees to use technologies such as ChatGPT (for text) and GitHub Copilot (for code generation) internally, companies should note that such generative AI tools often take user prompts and outputs as training data to further improve their models. Luckily, generative AI companies typically offer more secure services and the ability to opt out of model training.

How innovative power sourcing can propel data centers toward sustainability

The increasing adoption of Generative AI technologies over the past few years has placed unprecedented energy demands on data centers, coinciding with a global energy emergency exacerbated by geopolitical crises. Electricity prices have since reached record highs in certain markets, while oil prices soared to their highest level in over 15 years. Volatile energy markets have awakened a need in the general population to become more flexible in their energy use. At the same time, the trends present an opportunity for the data center sector to get ahead of the game. By becoming managers of energy, as opposed to just consumers, market players can find more efficient and cost-effective ways to source power. Innovative renewable options present a highly attractive avenue in this regard. As a result, data center providers are working more collaboratively with the energy sector for solutions. And for them, it’s increasingly likely that optimizing efficiency won’t be just about being close to the grid, but also about being close to the power-generation site – or even generating and storing power on-site.

Google DeepMind Introduces the Frontier Safety Framework

Existing protocols for AI safety focus on mitigating risks from existing AI systems. Some of these methods include alignment research, which trains models to act within human values, and implementing responsible AI practices to manage immediate threats. However, these approaches are mainly reactive and address present-day risks, without accounting for the potential future risks from more advanced AI capabilities. In contrast, the Frontier Safety Framework is a proactive set of protocols designed to identify and mitigate future risks from advanced AI models. The framework is exploratory and intended to evolve as more is learned about AI risks and evaluations. It focuses on severe risks resulting from powerful capabilities at the model level, such as exceptional agency or sophisticated cyber capabilities. The Framework aims to align with existing research and Google’s suite of AI responsibility and safety practices, providing a comprehensive approach to preventing any potential threats.

Proof-of-concept quantum repeaters bring quantum networks a big step closer

There are two main near-term use cases for quantum networks. The first use case is to transmit encryption keys. The idea is that public key encryption – the type currently used to secure Internet traffic – could soon be broken by quantum computers. Symmetrical encryption – where the same key is used to both encrypt and decrypt messages – is more future-proof, but you need a way to get that key to the other party. ... Today, however, the encryption we currently have is good enough, and there’s no immediate need for companies to look for secure quantum networks. Plus, there’s progress already being made on creating quantum-proof encryption algorithms. The other use for quantum networks is to connect quantum computers. Since quantum networks transmit entangled photons, the computers so connected would also be entangled, theoretically allowing for the creation of clustered quantum computers that act as a single machine. “There are ideas for how to take quantum repeaters and parallelize them to provide very high connectivity between quantum computers,” says Oskar Painter, director of quantum hardware at AWS. 
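The key-distribution problem behind that first use case is easy to see in a toy symmetric cipher. The one-time-pad sketch below is for illustration only, not a recommendation: the same key both encrypts and decrypts, which is precisely why the key itself must reach the other party over a secure channel, the job quantum key distribution is meant to do:

```python
import secrets

# Toy one-time pad: XOR the message with a random key of equal length.
# Identical key material reverses the operation, so whoever holds the
# key can decrypt -- hence the key-distribution problem.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

message = b"wire transfer approved"
key = secrets.token_bytes(len(message))   # must be delivered securely

ciphertext = xor_bytes(message, key)
plaintext = xor_bytes(ciphertext, key)    # same key undoes the encryption

print(plaintext == message)  # True
```

Public key encryption sidesteps this delivery problem, which is exactly why its potential vulnerability to quantum computers pushes interest back toward symmetric schemes plus a secure key channel.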

Quote for the day:

"Many of life’s failures are people who did not realize how close they were to success when they gave up." -- Thomas Edison

Daily Tech Digest - May 21, 2024

Most Software Engineers Know Nothing About Hardware

Most software engineers would like to believe that they have no need to know the intricacies of hardware, as long as what they are using supports the software they want to use and build. On the contrary, a user offered a thought-provoking take, suggesting that understanding hardware could bolster several fields, such as cybersecurity. “I think it would help in programming to know how the chip and memory think only to secure the program from hackers,” he said. This highlights a practical benefit of hardware knowledge that goes beyond mere academic interest. Moreover, software engineers who know a thing or two about hardware can create better software and make better use of the hardware’s capabilities. This perspective suggests that a deeper understanding of hardware can lead to more efficient and innovative software solutions. The roles of software engineers are also changing with the advent of AI tools. For over a decade, a popular belief has been that a computer science degree is all you need to tread the path to wealth, especially in a country like India. 
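A classic, concrete case of hardware knowledge improving software is memory locality. The sketch below compares row-first and column-first traversal of a nested list; the effect is modest in Python and far larger in C-like languages, but the principle, that access order should follow memory layout, is the same:

```python
import time

# Python lists of lists are stored row by row, so traversing row-first
# walks memory in order, while column-first traversal hops between rows.

N = 500
matrix = [[1] * N for _ in range(N)]

def sum_row_major(m):
    return sum(x for row in m for x in row)

def sum_col_major(m):
    return sum(m[i][j] for j in range(len(m[0])) for i in range(len(m)))

# Same answer either way; only the access pattern differs.
assert sum_row_major(matrix) == sum_col_major(matrix) == N * N

t0 = time.perf_counter(); sum_row_major(matrix); t1 = time.perf_counter()
sum_col_major(matrix);    t2 = time.perf_counter()
print(f"row-major: {t1 - t0:.4f}s  column-major: {t2 - t1:.4f}s")
```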

Network teams are ready to switch tool vendors

For a variety of reasons, network management tools have historically been sticky in IT organizations. First, tool vendors sold them with perpetual licenses, which meant a long-term investment. Second, tools could take time to implement, especially for larger companies that invest months of time customizing data collection mechanisms, dashboards, alerts, and more. Also, many tools were difficult to use, so they came with a learning curve. But things have changed. Most network management tools are now available as SaaS solutions with a subscription license. Many vendors have developed new automation features and AI-driven features that reduce the amount of customization that some IT organizations will need to do. ... For all these reasons, many IT organizations feel less locked into their network management tools today. Still, it’s important to note that replacing tools remains challenging. In fact, network teams that struggle to hire and retain skilled personnel are less likely to replace a tool. They don’t have the capacity to tackle such a project because they’re barely keeping up with day-to-day operations. Larger enterprises, which have larger and more complex networks, were also less open to new tools.

Reducing CIO-CISO tension requires recognizing the signs

In the case of highly critical vulnerabilities that have been exploited, the CISO will want patches applied immediately, and the CIO is likely aligned with this urgency. But for medium-level patches, the CIO may be under pressure to defer these disruptions to production systems, and may push back on the CISO to wait a week or even months before patching. ... Incident management is another area ripe for tension. The CISO has a leadership role to play when there is a serious cyber or business disruption incident, and is often the “messenger” who shares the bad news. Naturally, the CIO wants to be immediately informed, but often the details are sparse with many unknowns. This can make the CISO look bad to the CIO, as there are often more questions than answers at this early stage. ... A fifth example is DevOps, as many CIOs, including myself, advocate for continuous delivery at velocity. Unfortunately, not as many CIOs advocate for DevSecOps to embed cybersecurity testing in the process. This is perhaps because the CIO is often under pressure from executive stakeholders to release new software builds and thus accept the risk that there may be some iteration required if this is not perfect.

Strategies for combating AI-enhanced BEC attacks

In addition to employee training and a zero-trust approach, companies should leverage continuous monitoring and risk-based access decisions. Security teams can use advanced analytics to monitor user activity and identify anomalies that might indicate suspicious behavior. Additionally, zero trust allows for implementing risk-based access controls – for example, access from an unrecognized location might trigger a stronger authentication challenge or require additional approval before granting access. Security teams can also use network segmentation to contain threats. This involves dividing the network into smaller compartments. So, even if attackers manage to breach one section, their movement is restricted, preventing them from compromising the entire network. ... Building a robust defense against BEC attacks requires a layered approach. Comprehensive security strategies that leverage zero trust are a must. However, they can’t do all the heavy lifting alone. Businesses must also empower their employees to make the right decisions by investing in security awareness training that incorporates real-world scenarios and teaches employees how to identify and report suspicious activities.
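The risk-based access decisions described above boil down to scoring signals and mapping the score to a control. A hedged sketch, with thresholds and signal weights that are purely illustrative, not from any product:

```python
# Toy risk-based access decision: each suspicious signal raises a
# score, and the score selects the control applied to the request.

def access_decision(known_device: bool, known_location: bool,
                    unusual_hour: bool) -> str:
    score = 0
    score += 0 if known_device else 2
    score += 0 if known_location else 2
    score += 1 if unusual_hour else 0

    if score == 0:
        return "allow"
    if score <= 2:
        return "step-up-auth"     # e.g. a stronger MFA challenge
    return "require-approval"     # manual review before granting access

print(access_decision(True, True, False))    # allow
print(access_decision(True, False, False))   # step-up-auth
print(access_decision(False, False, True))   # require-approval
```

This mirrors the example in the text: an unrecognized location alone triggers a stronger authentication challenge, while several anomalies together require additional approval.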

From sci-fi to reality: The dawn of emotionally intelligent AI

Greater ability to integrate audio, visual and textual data opens potentially transformative opportunities in sectors like healthcare, where it could lead to more nuanced patient interaction and personalized care plans. ... As GPT-4o and similar offerings continue to evolve, we can anticipate more sophisticated forms of natural language understanding and emotional intelligence. This could lead to AI that not only understands complex human emotions but also responds in increasingly appropriate and helpful ways. The future might see AI becoming an integral part of emotional support networks, providing companionship and aid that feels genuinely empathetic and informed. The journey of AI from niche technology to a fundamental part of our daily interactions is both exhilarating and daunting. To navigate this AI revolution responsibly, it is essential for developers, users and policymakers to engage in a rigorous and ongoing dialogue about the ethical use of these technologies. As GPT-4o and similar AI tools become more embedded in our daily lives, we must navigate this transformative journey with wisdom and foresight, ensuring AI remains a tool that empowers rather than diminishes our humanity.

Unlocking DevOps Mastery: A Comprehensive Guide to Success

From code analysis and vulnerability scanning to access control and identity management, organizations must implement comprehensive security controls to mitigate risks throughout the software development lifecycle. Furthermore, compliance with industry standards and regulatory requirements must be baked into the DevOps process from the outset rather than treated as an afterthought. Moreover, organizations must be vigilant about ethical considerations and algorithmic bias in environments leveraging AI and machine learning, where the stakes are heightened. By embedding security and compliance into every stage of the DevOps pipeline, organizations can build trust and confidence among stakeholders and mitigate potential risks to their reputation and bottom line. DevSecOps, an extension of DevOps, emphasizes integrating security practices throughout the software development lifecycle (SDLC). Several key security practices and frameworks should be integrated into the DevOps program. 
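Embedding those controls "into every stage of the pipeline" often takes the shape of a security gate: a step that runs each scanner and fails the build on any finding. A minimal sketch; the two commands below are trivial stand-ins, and in practice you would substitute your actual tools (a static analyzer, a dependency auditor, a secrets scanner):

```python
import subprocess
import sys

# Minimal CI security gate: run each check and fail the build if any
# check exits nonzero.

def security_gate(commands: list[list[str]]) -> bool:
    """Return True only if every check exits cleanly."""
    ok = True
    for cmd in commands:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"FAILED: {' '.join(cmd)}", file=sys.stderr)
            ok = False
    return ok

# Hypothetical pipeline step with placeholder commands.
checks = [
    [sys.executable, "-c", "print('static analysis ok')"],
    [sys.executable, "-c", "print('dependency audit ok')"],
]
if not security_gate(checks):
    sys.exit(1)
```

Running the gate on every commit, rather than once before release, is what moves security from afterthought to built-in.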

Composable Enterprise: The Evolution of MACH and Jamstack

As the Jamstack and the MACH Architecture continue to evolve, categorizing the MACH architecture as “Jamstack for the enterprise” might not entirely be accurate, but it’s undeniable that the MACH approach has been gaining traction among vendors and has increasing appeal to enterprise customers. Demeny points out that the MACH Alliance recently celebrated passing the 100 certified member mark, and believes that the organization and the MACH architecture are entering a new phase. “This also means that the audience profile of the MACH community and buyers is starting to shift a bit from developers to more business-focused stakeholders,” said Demeny. “As a result, the Alliance is producing more work around interoperability understanding and standards in order to help these newer stakeholders understand and navigate the landscape.” Regardless of what tech stack developers and organizations choose, the evolution of the Jamstack and the MACH architecture are providing more options and flexibility for developers. 

The Three As of Building A+ Platforms: Acceleration, Autonomy, and Accountability

If the why is about creating value for the business, the what is all about driving velocity for your users, bringing delight to your users, and making your users awesome at what they do. This requires bringing a product mindset to building a platform. ... This is where I found it very useful to think in terms of the Double Diamond framework, where the first diamond is about product discovery and problem definition and the second is about building a solution. While in the first diamond you can do divergent thinking and ideation, either widely or deeply, the second diamond allows for action-oriented, focused thinking that converges into developing and delivering the solution. ... Platforms cannot be shaky - solid fundamentals (Reliability, Security, Privacy, Compliance, disruption) and operational excellence are table stakes, not a nice-to-have. Our platforms have to be stable. In our case, we decided to put a stop to all feature delivery for about a quarter, did a methodical analysis of all the failures that led to the massive drop in deploy rates, and focused on crucial reliability efforts until we brought this metric back up to 99%+.

Training LLMs: Questions Rise Over AI Auto Opt-In by Vendors

“Organizations who use these technologies must be clear with their users about how their information will be processed," said John Edwards, Britain's Information Commissioner, in a speech last week at the New Scientist Emerging Technologies summit in London. "It's the only way that we continue to reap the benefits of AI and emerging technologies." Whether opting in users by default complies with GDPR remains an open question. "It's hard to think how an opt-out option can work for AI training data if personal data is involved," Armstrong said. "Unless the opt-out option is really prominent - for example, clear on-screen warnings; burying it in the terms and conditions won't be enough - that's unlikely to satisfy GDPR's transparency requirements." Clearer answers may yet be forthcoming. "Many privacy leaders have been grappling with questions around topics such as transparency, purpose limitation and grounds to process in relation to the use of personal data in the development and use of AI," said law firm Skadden, Arps, Slate, Meagher & Flom LLP, in a response to a request from the U.K. government to domestic regulators to detail their approach to AI. 

Data Owner vs. Data Steward: What’s the Difference?

Data owners (also called stakeholders) are often senior leaders or bosses within the organization, who have taken responsibility for managing the data in their specific department or business area. For instance, the director of marketing or the head of production are often data owners because the data used by their staff is critical to their operations. It is a position that requires both maturity and experience. Data owners are also responsible for implementing the security measures necessary for protecting the data they own – encryption, firewalls, access controls, etc. The data steward, on the other hand, is responsible for managing the organization’s overall Data Governance policies, monitoring compliance, and ensuring the data is of high quality. They also oversee the staff, as a form of the data police, to ensure they are following the guidelines that support high-quality data. ... Data stewards can offer valuable recommendations and insights to data owners, and vice versa. Regular meetings and collaboration between the data steward and data owners are necessary for successful Data Governance and management.

Quote for the day:

"Pursue one great decisive aim with force and determination." -- Carl von Clausewitz

Daily Tech Digest - May 18, 2024

AI imperatives for modern talent acquisition

In talent acquisition, the journey ahead promises to be tougher than ever. Recruiters face a paradigm shift, moving beyond traditional notions of filling vacancies to addressing broader business challenges. The days of simply sourcing candidates are long gone; today's TA professionals must navigate complexities ranging from upskilling and reskilling to mobility and contracting. ... At the heart of it lies a structural shift reshaping the global workforce. Demographic trends, such as declining birth rates, paint a sobering picture of a world where there simply aren't enough people to fill available roles. This demographic drought isn't limited to a single region; it's a global phenomenon with far-reaching implications. Compounding this challenge is the changing nature of careers. No longer tethered to a single company, employees are increasingly empowered to seek out opportunities that align with their aspirations and values. This has profound implications for talent retention and development, necessitating a shift towards systemic HR strategies that prioritise upskilling, mobility, and employee experience.

Ineffective scaled agile: How to ensure agile delivers in complex systems

When developing a complex system it’s impossible to uncover every challenge even with the most in-depth upfront analysis. One way of dealing with this is by implementing governance that emphasizes incorporating customer feedback, active leadership engagement and responding to changes and learnings. Another challenge can arise when teams begin to embrace working autonomously. They start implementing local optimizations which can lead to inefficiencies. The key is that the governance approach should make sure that the overall work is broken down into value increments per domain and then broken down further into value increments per team in regular time intervals. This creates a shared sense of purpose across teams and guides them towards the same goal. Progress can then be tracked using the working system as the primary measure of progress. Those responsible for steering the overall program need to facilitate feedback and prioritization discussions, and should encourage the leadership to adapt to internal insights or changes in the external environment.

How to navigate your way to stronger cyber resilience

If an organization doesn’t have a plan for what to do if a security incident takes place, they risk finding themselves in the precarious position of not knowing how to react to events, and consequently doing nothing or the wrong thing. The report also shows that just over a third of the smaller companies worry that senior management doesn’t see cyberattacks as a significant risk. How can they get greater buy-in from their management team on the importance of cyber risks? It’s important to understand that this is not a question of management failure. It is hard for business leaders to engage with or care about something they don’t fully understand. The onus is on security professionals to speak in a language that business leaders understand. They need to be storytellers and be able to explain how to protect brand reputation through proactive, multi-faceted defense programs. Every business leader understands the concept of risk. If in doubt, present cybersecurity threats, challenges, and opportunities in terms of how they relate to business risk.

DDoS attacks: Definition, examples, and techniques

DDoS botnets are the core of any DDoS attack. A botnet consists of hundreds or thousands of machines, called zombies or bots, that a malicious hacker has gained control over. The attackers will harvest these systems by identifying vulnerable systems that they can infect with malware through phishing attacks, malvertising attacks, and other mass infection techniques. The infected machines can range from ordinary home or office PCs to IoT devices—the Mirai botnet famously marshalled an army of hacked CCTV cameras—and their owners almost certainly don’t know they’ve been compromised, as they continue to function normally in most respects. The infected machines await a remote command from a so-called command-and-control server, which serves as a command center for the attack and is often itself a hacked machine. Once unleashed, the bots all attempt to access some resource or service that the victim makes available online. Individually, the requests and network traffic directed by each bot towards the victim would be harmless and normal. 
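Because each bot's traffic looks individually harmless, defenses have to look at aggregates. A hedged sketch of one simple defensive signal, a per-window request counter that flags sources whose rate far exceeds a typical client (the threshold and IPs below are illustrative):

```python
from collections import Counter

# Count requests per source IP within one time window and flag any
# source whose volume is far above what a normal client would send.

def flag_heavy_sources(requests, threshold=100):
    """requests: iterable of source IPs observed in one time window."""
    counts = Counter(requests)
    return {ip for ip, n in counts.items() if n > threshold}

# Simulated window: one aggressive source among ordinary clients.
window = ["10.0.0.5"] * 350 + ["192.168.1.9"] * 12 + ["172.16.0.2"] * 7
print(flag_heavy_sources(window))  # {'10.0.0.5'}
```

Real DDoS mitigation is far more sophisticated, since distributed attacks spread load across many sources precisely to stay under per-IP thresholds, but rate aggregation is the starting point.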

7 ways to use AI in IT disaster recovery

The integration of AI into IT disaster recovery is not just a trendy addition; it's a significant enhancement that can lead to quicker response times, reduced downtime and overall improved business continuity. By proactively identifying risks, optimizing resources and continuously learning from past incidents, AI offers a forward-thinking approach to disaster recovery that could be the difference between a minor IT hiccup and a significant business disruption. ... A significant portion of IT disasters are due to cyberthreats. AI and machine learning can help mitigate these issues by continuously monitoring network traffic, identifying potential threats and taking immediate action to mitigate risks. Most new cybersecurity businesses are using AI to learn about emerging threats. They also use AI to look at system anomalies and block questionable activity. ... AI can optimize the use of available resources, ensuring that critical functions receive the necessary resources first. This optimization can greatly increase the efficiency of the recovery process and help organizations working with limited resources.
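The anomaly monitoring described above can be reduced to a baseline-and-deviation check. Production systems use far richer models, but a z-score against historical traffic, sketched below with illustrative numbers, shows the idea:

```python
import statistics

# Learn a baseline from historical samples and flag any new sample
# that deviates from the mean by more than z_threshold standard
# deviations.

def is_anomalous(sample: float, baseline: list[float],
                 z_threshold: float = 3.0) -> bool:
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(sample - mean) / stdev > z_threshold

# Hypothetical historical throughput readings (Mbps).
baseline_mbps = [98, 102, 101, 99, 100, 103, 97, 100]

print(is_anomalous(101, baseline_mbps))   # False: within normal range
print(is_anomalous(260, baseline_mbps))   # True: likely incident traffic
```

The same pattern applies whether the samples are network throughput, login rates, or disk I/O: the model only needs a baseline to learn and a deviation rule to act on.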

Underwater datacenters could sink to sound wave sabotage

In a paper available on the arXiv open-access repository, the researchers detail how sound at a resonant frequency of the hard disk drives (HDDs) deployed in submerged enclosures can cause throughput reduction and even application crashes. HDDs are still widely used in datacenters, despite their obituary having been written many times, and are typically paired with flash-based SSDs. The researchers focused on hybrid and full-HDD architectures to evaluate the impact of acoustic attacks. They found that sound at the right resonance frequency induces vibrations in the read-write head and platter of the disks through vibration propagation, proportional to the acoustic pressure, or intensity, of the sound. This degrades the disk's read/write performance. For the tests, a Supermicro rack server configured with a RAID 5 storage array was placed inside a metal enclosure in two scenarios: an indoor laboratory water tank and an open-water testing facility, which was actually a lake on the University of Florida campus. Sound was generated from an underwater speaker.
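The physics behind "the right resonance frequency" can be illustrated with the textbook model of a driven damped oscillator: response peaks sharply at resonance and scales linearly with drive pressure. This is a generic sketch of that behaviour, not the paper's model, and all parameter values are arbitrary:

```python
# Illustrative only: treat the read-write head as a driven damped
# oscillator. Its steady-state vibration amplitude peaks at the
# resonant frequency and scales linearly with acoustic pressure.
# The resonant frequency and damping below are made-up values.
import math

def vibration_amplitude(f, f0, pressure, damping=0.02):
    """Amplitude of a driven damped oscillator, arbitrary units."""
    w, w0 = 2 * math.pi * f, 2 * math.pi * f0
    return pressure / math.sqrt((w0**2 - w**2)**2 + (2 * damping * w0 * w)**2)

f0 = 5_000.0  # assumed resonant frequency, Hz
on_res = vibration_amplitude(f0, f0, pressure=1.0)
off_res = vibration_amplitude(0.8 * f0, f0, pressure=1.0)
print(on_res / off_res)  # response is far stronger at resonance

# Doubling the acoustic pressure doubles the induced amplitude:
assert vibration_amplitude(f0, f0, 2.0) == 2 * on_res
```

The sharp peak is why an attacker must hit the drive's resonant frequency, and the linear pressure dependence is why louder sound (or a closer speaker) degrades throughput further.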

Agile Design, Lasting Impact: Building Data Centers for the AI Era

While there is a clear need for more data centers, the development timeline for building new, modern data centers incorporating these technologies and regulatory adaptations is currently between three and five years (more in some cases). Moreover, the fast pace at which technology is evolving means manufacturers are likely to face the need to rethink strategy and innovation mid-build to accommodate further advancements. ... This is a pivotal moment for our industry, and what’s built today could influence what’s possible tomorrow. We’ve had successful adaptations before, but given the current pace of evolution, future builds need to accommodate retrofits to ensure they remain fit for purpose. It's crucial to strike a balance between meeting demand, adhering to regulations, and designing for adaptability and durability to stay ahead. We might see a rise in smaller, colocation data centers offering flexibility, reduced latency, and cost savings. At the same time, medium players could evolve into hyperscalers, with the right vision to build something suitable to exist in the next hype cycle.

Quantum internet inches closer: Qubits sent 22 miles via fiber optic cable

Even as the biggest names in the tech industry race to build fault-tolerant quantum computers, the transition from binary to quantum can only be completed with a reliable network to transmit the data. Unlike binary bits, which are transported as light signals inside a fiber optic cable and can be read, amplified, and retransmitted over long distances, quantum bits (qubits) are fragile, and even attempting to read them changes their state. ... Researchers in the Netherlands, China, and the US separately demonstrated how qubits could be stored in “quantum memory” and transmitted over the fiber optic network. Ronald Hanson and his team at the Delft University of Technology in the Netherlands encoded qubits in the electrons of nitrogen atoms and the nuclear states of carbon atoms in the small diamond crystals that housed them. An optical fiber link ran 25 miles from the university to another laboratory in The Hague to establish a link with similarly embedded nitrogen atoms in diamond crystals.
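Why does reading a qubit change its state? Standard quantum mechanics: measurement collapses a superposition to a definite basis state. A tiny single-qubit sketch (generic textbook behaviour, not a model of the Delft experiment):

```python
# Why a qubit cannot be read and re-amplified like a classical bit:
# measuring it collapses the superposition, destroying the original
# amplitudes. Single-qubit statevector sketch, computational basis.
import math
import random

def measure(state):
    """Measure a qubit (a, b) = a|0> + b|1>.
    Returns the random outcome and the collapsed state."""
    a, b = state
    p0 = abs(a) ** 2  # Born rule: probability of outcome 0
    outcome = 0 if random.random() < p0 else 1
    return outcome, (1.0, 0.0) if outcome == 0 else (0.0, 1.0)

# An equal superposition |+> = (|0> + |1>) / sqrt(2) ...
plus = (1 / math.sqrt(2), 1 / math.sqrt(2))
outcome, collapsed = measure(plus)

# ...collapses after one read: the result is a plain basis state,
# and the original amplitudes are gone.
print(outcome, collapsed)
```

This is why quantum networks need quantum memories and entanglement-based links rather than the read-and-amplify repeaters used for classical optical signals.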

Cyber resilience: Safeguarding your enterprise in a rapidly changing world

In an era defined by pervasive digital connectivity and ever-evolving threats, cyber resilience has become a crucial pillar of survival and success for modern enterprises. It represents an organisation’s capacity not just to withstand and recover from cyberattacks but also to adapt, learn, and thrive in the face of relentless and unpredictable digital challenges. ... Because of the crippling effects a cyberattack can have on a nation, governments and regulatory bodies are also working to develop guidelines and standards that encourage organisations to embrace cyber resilience. For instance, the European Parliament recently passed the European Cyber Resilience Act (CRA), a legal framework describing the cybersecurity requirements for hardware and software products placed on the European market. It aims to ensure manufacturers take security seriously throughout a product’s lifecycle. In other regions, such as India, where cybersecurity adoption is still maturing, the onus falls on industry leaders to work with governmental bodies and other enterprises to encourage the development and adoption of similar obligations.

How to Build Large Scale Cyber-Physical Systems

There are several challenges in building hardware-reliant cyber-physical systems, such as hardware lead times, organisational structure, common language, system decomposition, cross-team communication, alignment, and culture. People engaged in the development of large-scale safety-critical systems need line of sight to business objectives, Yeman said. Each team should be able to connect its daily work to those objectives. Yeman suggested communicating the objectives through the intent and goals of the system rather than through specific tasks. An example of an intent-based system objective would be to ensure the system can communicate with military platforms securely, as opposed to specifically defining that the system must communicate via Link-16, she added. Yeman advised breaking the system problem down into smaller solvable problems. For each of those problems, resolve what is known first, then resolve the unknowns through a series of experiments, she said. This approach allows you to iteratively and incrementally build a continuously validated solution.

Quote for the day:

"Remember, teamwork begins by building trust. And the only way to do that is to overcome our need for invulnerability." -- Patrick Lencioni