Daily Tech Digest - May 29, 2024

Algorithmic Thinking for Data Scientists

While data scientists with computer science degrees will be familiar with the core concepts of algorithmic thinking, many increasingly enter the field with other backgrounds, ranging from the natural and social sciences to the arts; this trend is likely to accelerate in the coming years as a result of advances in generative AI and the growing prevalence of data science in school and university curriculums. ... One topic that deserves special attention in the context of algorithmic problem solving is that of complexity. When comparing two different algorithms, it is useful to consider the time and space complexity of each algorithm, i.e., how the time and space taken by each algorithm scales relative to the problem size (or data size). ... Some algorithms may manifest additive or multiplicative combinations of the above complexity levels. E.g., a for loop followed by a binary search entails an additive combination of linear and logarithmic complexities, attributable to sequential execution of the loop and the search routine, respectively.


Job seekers and hiring managers depend on AI — at what cost to truth and fairness?

The darker side to using AI in hiring is that it can bypass potential candidates based on predetermined criteria that don’t necessarily take all of a candidate’s skills into account. And for job seekers, the technology can generate great-looking resumes, but often they’re not completely truthful when it comes to skill sets. ... “AI can sound too generic at times, so this is where putting your eyes on it is helpful,” Toothacre said. She is also concerned about the use of AI to complete assessments. “Skills-based assessments are in place to ensure you are qualified and check your knowledge. Using AI to help you pass those assessments is lying about your experience and highly unethical.” There’s plenty of evidence that genAI can improve resume quality, increase visibility in online job searches, and provide personalized feedback on cover letters and resumes. However, concerns about overreliance on AI tools, lack of human touch in resumes, and the risk of losing individuality and authenticity in applications are universal issues that candidates need to be mindful of regardless of their geographical location, according to Helios’ Hammell.


Comparing smart contracts across different blockchains from Ethereum to Solana

Polkadot is designed to enable interoperability among various blockchains through its unique architecture. The network’s core comprises the relay chain and parachains, each playing a distinct role in maintaining the system’s functionality and scalability. ... Developing smart contracts on Cardano requires familiarity with Haskell for Plutus and an understanding of Marlowe for financial contracts. Educational resources like the IOG Academy provide learning paths for developers and financial professionals. Tools like the Marlowe Playground and the Plutus development environment aid in simulating and testing contracts before deployment, ensuring they function as intended. ... Solana’s smart contracts are stateless, meaning the contract logic is separated from the state, which is stored in external accounts. This separation enhances security and scalability by isolating the contract code from the data it interacts with. Solana’s account model allows for program reusability, enabling developers to create new tokens or applications by interacting with existing programs, reducing the need to redeploy smart contracts, and lowering costs.


3 things CIOs can do to make gen AI synch with sustainability

“If you’re only buying inference services, ask them how they can account for all the upstream impact,” says Tate Cantrell, CTO of Verne, a UK-headquartered company that provides data center solutions for enterprises and hyperscalers. “Inference output takes a split second. But the only reason those weights inside that neural network are the way they are is because of massive amounts of training — potentially one or two months of training at something like 100 to 400 megawatts — to get that infrastructure the way it is. So how much of that should you be charged for?” Cantrell urges CIOs to ask providers about their own reporting. “Are they doing open reporting about the full upstream impact that their services have from a sustainability perspective? How long is the training process, how long is it valid for, and how many customers did that weight impact?” According to Sundberg, an ideal solution would be to have the AI model tell you about its carbon footprint. “You should be able to ask Copilot or ChatGPT what the carbon footprint of your last query is,” he says. 


EU’s ChatGPT taskforce offers first look at detangling the AI chatbot’s privacy compliance

The taskforce’s report discusses this knotty lawfulness issue, pointing out ChatGPT needs a valid legal basis for all stages of personal data processing — including collection of training data; pre-processing of the data (such as filtering); training itself; prompts and ChatGPT outputs; and any training on ChatGPT prompts. The first three of the listed stages carry what the taskforce couches as “peculiar risks” for people’s fundamental rights — with the report highlighting how the scale and automation of web scraping can lead to large volumes of personal data being ingested, covering many aspects of people’s lives. It also notes scraped data may include the most sensitive types of personal data (which the GDPR refers to as “special category data”), such as health info, sexuality, political views etc, which requires an even higher legal bar for processing than general personal data. On special category data, the taskforce also asserts that just because it’s public does not mean it can be considered to have been made “manifestly” public — which would trigger an exemption from the GDPR requirement for explicit consent to process this type of data.


Avoiding the cybersecurity blame game

Genuine negligence or deliberate actions should be handled appropriately, but apportioning blame and meting out punishment must be the final step in an objective, reasonable investigation. It should certainly not be the default reaction. So far, so reasonable, yes? But things are a little more complicated than this. It’s all very well saying, “don’t blame the individual, blame the company”. Effectively, no “company” does anything; only people do. The controls, processes and procedures that let you down were created by people – just different people. If we blame the designers of controls, processes and procedures… well, we are just shifting blame, which is still counterproductive. ... Managers should use the additional resources to figure out how to genuinely change the work environment in which employees operate and make it easier for them to do their job in a secure practical manner. Managers should implement a circular, collaborative approach to creating a frictionless, safer environment, working positively and without blame.


The decline of the user interface

The Ok and Cancel buttons played important roles. A user might go to a Settings dialog, change a bunch of settings, and then click Ok, knowing that their changes would be applied. But often, they would make some changes and then think “You know, nope, I just want things back like they were.” They’d hit the Cancel button, and everything would reset to where they started. Disaster averted. Sadly, this very clear and easy way of doing things somehow got lost in the transition to the web. On the web, you will often see Settings pages without Ok and Cancel buttons. Instead, you’re expected to click an X in the upper right to make the dialog close, accepting any changes that you’ve made. ... In the newer versions of Windows, I spend a dismayingly large amount of time trying to get the mouse to the right spot in the corner or edge of an application so that I can size it. If I want to move a window, it is all too frequently difficult to find a location at the top of the application to click on that will result in the window being relocated. Applications used to have a very clear title bar that was easy to see and click on.


Lawmakers paint grim picture of US data privacy in defending APRA

At the center of the debate is the American Privacy Rights Act (APRA), the push for a federal data privacy law that would either simplify a patchwork of individual state laws – or run roughshod over existing privacy legislation, depending on which state is offering an opinion. While harmonizing divergent laws seems wise as a general measure, states like California, where data privacy laws are already much stricter than in most places, worry about its preemptive clauses weakening their hard-fought privacy protections. Rodgers says APRA is “an opportunity for a reset, one that can help return us to the American Dream our Founders envisioned. It gives people the right to control their personal information online, something the American people overwhelmingly want,” she says. “They’re tired of having their personal information abused for profit.” From loose permissions on sharing location data to exposed search histories, there are far too many holes in Americans’ digital privacy for Rodgers’ liking. Pointing to the especially sensitive matter of childrens’ data, she says that “as our kids scroll, companies collect nearly every data point imaginable to build profiles on them and keep them addicted. ...”


Picking an iPaaS in the Age of Application Overload

Companies face issues using proprietary integration solutions, as they end up with black-box solutions with limited flexibility. For example, the inability to natively embed outdated technology into modern stacks, such as cloud native supply chains with CI/CD pipelines, can slow down innovation and complicate the overall software delivery process. Companies should favor iPaaS technologies grounded in open source and open standards. Can you deploy it to your container orchestration cluster? Can you plug it into your existing GitOps procedures? Such solutions not only ensure better integration into proven QA-tested procedures but also offer greater freedom to migrate, adapt and debug as needs evolve. ... As organizations scale, so too must their integration solutions. Companies should avoid iPaaS solutions offering only superficial “cloud-washed” capabilities. They should prioritize cloud native solutions designed from the ground up for the cloud, and that leverage container orchestration tools like Kubernetes and Docker Swarm, which are essential for ensuring scalability and resilience.
Shifting left is a cultural and practice shift, but it also includes technical changes to how a shared testing environment is set up. ... The approach scales effectively across engineering teams, as each team or developer can work independently on their respective services or features, thereby reducing dependencies. While this is great advice, it can feel hard to implement in the current development environment: If the process of releasing code to a shared testing cluster takes too much time, it doesn’t seem feasible to test small incremental changes. ... The difference between finding bugs as a user and finding them as a developer is massive: When an operations or site reliability engineer (SRE) finds a problem, they need to find the engineer who released the code, describe the problem they’re seeing, and present some steps to replicate the issue. If, instead, the original developer finds the problem, they can cut out all those steps by looking at the output, finding the cause, and starting on a fix. This proactive approach to quality reduces the number of bugs that need to be filed and addressed later in the development cycle.



Quote for the day:

"The best and most beautiful things in the world cannot be seen or even touched- they must be felt with the heart." -- Helen Keller

Daily Tech Digest - May 28, 2024

Partitioning an LLM between cloud and edge

By partitioning LLMs, we achieve a scalable architecture in which edge devices handle lightweight, real-time tasks while the heavy lifting is offloaded to the cloud. For example, say we are running medical scanning devices that exist worldwide. AI-driven image processing and analysis is core to the value of those devices; however, if we’re shipping huge images back to some central computing platform for diagnostics, that won’t be optimal. Network latency will delay some of the processing, and if the network is somehow out, which it may be in several rural areas, then you’re out of business. ... The first step involves evaluating the LLM and the AI toolkits and determining which components can be effectively run on the edge. This typically includes lightweight models or specific layers of a larger model that perform inference tasks. Complex training and fine-tuning operations remain in the cloud or other eternalized systems. Edge systems can preprocess raw data to reduce its volume and complexity before sending it to the cloud or processing it using its LLM.


How ISRO fosters a culture of innovation

As people move up the corporate totem pole their attention to detail gives way to big-picture thinking, and rightly so. You can’t look beyond and yet mind your every step on the way to an uncharted terrain. Yet when it comes to research and development, especially high-risk, high-impact projects, there is hardly any trade-off between thinking big and thinking in detail. You must do both. For instance, in the inaugural session of my last workshop, one of the senior directors was invited and the first thing he noticed was the mistake in the session duration. ... Now imagine this situation in a corporate context. How likely is the boss to call out a rather silly mistake? It was innocuous for all practical purposes. Most won’t point it out, let alone address it immediately. But not at ISRO.  ... Here’s the interesting thing. One of the participants was incessantly quizzing me, bordering on a challenge, and everyone was nonchalant about it. In a typical corporate milieu, such people would be shunned or would be asked to shut up. But not here. We had a volley of arguments, and people around seemed to enjoy it and encourage it. They were not only okay with varied points of view but also protective of it. 


GoDaddy has 50 large language models; its CTO explains why

“What we’ve done is built a common gateway that talks to all the various large language models on the backend, and currently we support more than 50 different models, whether they’re for images, text or chat, or whatnot. ... “Obviously, this space is accelerating superfast. A year ago, we had zero LLMs and today we have 50 LLMs. That gives you some indication of just how fast this is moving. Different models will have different attributes and that’s something we’ll have to continue to monitor. But by having that mechanism we can monitor with and control what we send and what we receive, we believe we can better manage that.” ... “In some ways, experiments that aren’t successful are some of the most interesting ones, because you learn what doesn’t work and that forces you to ask follow-up questions about what will work and to look at things differently. As teams saw the results of these experiments and saw the impact on customers, it’s really engaged them to spend more time with the technology and focus on customer outcomes.”


How to combat alert fatigue in cybersecurity

Alert fatigue is the result of several related factors. First, today’s security tools generate an incredible volume of event data. This makes it difficult for security practitioners to distinguish between background noise and serious threats. Second, many systems are prone to false positives, which are triggered either by harmless activity or by overly sensitive anomaly thresholds. This can desensitize defenders who may end up missing important attack signals. The third factor contributing to alert fatigue is the lack of clear prioritization. The systems generating these alerts often don’t have mechanisms that triage and prioritize the events. This can lead to paralyzing inaction because the practitioners don’t know where to begin. Finally, when alert records or logs do not contain sufficient evidence and response guidance, defenders are unsure of the next actionable steps. This confusion wastes valuable time and contributes to frustration and fatigue. ... The elements of the “SOC visibility triad” I mentioned earlier – NDR, EDR, and SIEM are among the critical new technologies that can help.


Driving buy-in: How CIOs get hesitant workforces to adopt AI

If willingness and skill are the two main dimensions that influence hesitancy toward AI, employees who question whether taking the time to learn the technology is worth the effort are at the intersection. These employees often believe the AI learning curve is too steep to justify embarking on in the first place, he notes. “People perceive that AI is something complex, probably because of all of these movies. They worry: Will they have time and effort to learn these new skills and to adapt to these new systems?” Jaksic says. This challenge is not unique to AI, he adds. “We all prefer familiar ways of working, and we don’t like to disrupt our established day-to-day activities,” he says. Perhaps the best inroads then is to show that learning enough about AI to use it productively does not require a monumental investment. To this end, Jaksic has structured a formal program at KEO for AI education in bite-size segments. The program, known as Summer of Innovation, is organized around lunchtime sessions taught by senior leaders around high-level AI concepts. 


Taking Gen AI mainstream with next-level automation

Gen AI needs to be accountable and auditable. It needs to be instructed and learn what information it can retrieve. Combining it with IA serves as the linchpin of effective data governance, enhancing the accuracy, security, and accountability of data throughout its lifecycle. Put simply, by wrapping Gen AI with IA businesses have greater control of data and automated workflows, managing how it is processed, secured – from unauthorized changes – and stored. It is this ‘process wrapper’ concept that will allow organizations to deploy Gen AI effectively and responsibly. Adoption and transparency of Gen AI – now – is imperative, as innovation continues to grow at pace. The past 12 months have seen significant innovations in language learning models (LLMs) and Gen AI to simplify automations that tackle complex and hard-to-automate processes. ... Before implementing any sort of new automation technology, organizations must establish use cases unique to their business and undertake risk management assessments to avoid potential noncompliance, data breaches and other serious issues.


Third-party software supply chain threats continue to plague CISOs

As software gets more complex with more dependent components, it quickly becomes difficult to detect coding errors, whether they are inadvertent or added for malicious purposes as attackers try to hide their malware. “A smart attacker would just make their attack look like an inadvertent vulnerability, thereby creating extremely plausible deniability,” Williams says. ... “No single developer should be able to check in code without another developer reviewing and approving the changes,” the agency wrote in their report. This was one of the problems with the XZ Utils compromise, where a single developer gained the trust of the team and was able to make modifications on their own. One method is to combine a traditional third-party risk management program with specialized consultants that can seek out and eliminate these vulnerabilities, such as the joint effort by PwC and ReversingLabs’ automated tools. The open-source community also isn’t just standing still. One solution is a tool introduced earlier this month by the Open Source Security Foundation called Siren. 


Who is looking out for your data? Security in an era of wide-spread breaches

Beyond organizations introducing the technology behind closed doors to keep data safe, the interest in biometrics smartcards shows that consumers also want to see improved protection play out in their physical transactions and finance management. This paradigm shift reflects not only a desire for heightened protection but also an acknowledgement of the limitations of traditional authentication methods. Attributing access to a fingerprint or facial recognition affirms to that person, in that moment, that their credentials are unique, and therefore that the data inside is safe. Encryption of fingerprint data within the card itself further ensures complete confidence in the solution. The encryption of personal identity data only strengthens this defense, ensuring that sensitive information remains inaccessible to unauthorized parties. These smartcards effectively mitigate the vulnerabilities associated with centralized databases. Biometric smart cards also change the dynamic of data storage. Rather than housing biometric credentials in centralized databases, where targets are also gathered in one location; smartcards sidestep that risk.


The Role of AI in Developing Green Data Centers

Green data centers, powered by AI technologies, are at the forefront of revolutionizing the digital infrastructure landscape with their significantly reduced environmental impact. These advanced facilities leverage AI to optimize energy consumption and cooling systems, leading to a substantial reduction in energy consumption and carbon footprint. This not only reduces greenhouse gas emissions but also paves the way for more sustainable operational practices within the IT industry. Furthermore, sustainability initiatives integral to green data centers extend beyond energy efficiency. They encompass the use of renewable energy sources such as wind, solar, and hydroelectric power to further diminish the reliance on fossil fuels. ... AI-driven solutions can continuously monitor and analyze vast amounts of data regarding a data center’s operational parameters, including temperature fluctuations, server loads, and cooling system performance. By leveraging predictive analytics and machine learning algorithms, AI can anticipate potential inefficiencies or malfunctions before they escalate into more significant issues that could lead to excessive power use.


Don't Expect Cybersecurity 'Magic' From GPT-4o, Experts Warn

Despite the fresh capabilities, don't expect the model to fundamentally change how a gen AI tool helps either attackers or defenders, said cybersecurity expert Jeff Williams. "We already have imperfect attackers and defenders. What we lack is visibility into our technology and processes to make better judgments," Williams, the CTO at Contrast Security, told Information Security Media Group. "GPT-4o has the exact same problem. So it will hallucinate non-existent vulnerabilities and attacks as well as blithely ignore real ones." ... Attackers might still gain some minor productivity boosts thanks to GPT-4o's fresh capabilities, including its ability to do multiple things at once, said Daniel Kang, a machine learning research scientist who has published several papers on the cybersecurity risks posed by GPT-4. These "multimodal" capabilities could be a boon to attackers who want to craft realistic-looking deep fakes that combine audio and video, he said. The ability to clone voices is one of GPT-4o's new features, although other gen AI models already offered this capability, which experts said can potentially be used to commit fraud by impersonating someone else's identify.



Quote for the day:

"Defeat is not bitter unless you swallow it." -- Joe Clark

Daily Tech Digest - May 27, 2024

10 big devops mistakes and how to avoid them

“One of the significant challenges with devops is ensuring seamless communication and collaboration between development and operations teams,” says Lawrence Guyot, president of IT services provider Empowerment through Technology & Education (ETTE). ... Ensuring the security of the software supply chain in a devops environment can be challenging. “The speed at which devops teams operate can sometimes overlook essential security checks,” Guyot says. “At ETTE, we addressed this by integrating automated security tools directly into our CI/CD pipeline, conducting real-time security assessments at every stage of development.” This integration not only helped the firm identify vulnerabilities early, but also ensured that security practices kept pace with rapid deployment cycles, Guyot says. ... “Aligning devops with business goals can be quite the hurdle,” says Remon Elsayea, president of TechTrone IT Services, an IT solutions provider for small and mid-sized businesses. “It often seems like the rapid pace of devops initiatives can outstrip the alignment with broader business objectives, leading to misaligned priorities,” Elsayea says.


Why We Need to Get a Handle on AI

A recent World Economic Forum report also found a widening cyber inequity, which is accelerating the profound impact of emerging technologies. The path forward therefore demands strategic thinking, concerted action, and a steadfast commitment to cyber resilience. Again, this isn’t new. Organizations of all sizes and maturity levels have often struggled to maintain the central tenets of organizational cyber resilience. At the end of the day, it is much easier to use technology to create malicious attacks than it is to use technology to detect such a wide spectrum of potential attack vectors and vulnerabilities. The modern attack surface is vast and can overwhelm an organization as they determine how to secure it. With this increased complexity and proliferation of new devices and attack vectors, people and organizations have become a bigger vulnerability than ever before. It is often said that humans are the biggest risk when it comes to security and deepfakes can more easily trick people into taking actions that benefit the attackers. Therefore, what questions should security teams be asking to protect their organization?


Demystifying cross-border data transfer compliance for Indian enterprises

The variability of these laws introduces complex compliance issues. As Indian enterprises expand globally, the significance of robust data compliance management escalates. Organizations like ours assist companies worldwide with customized solutions tailored to the complexities of cross-border data transfer compliance. We ensure that businesses not only meet international data protection standards but also enhance their data governance practices through our comprehensive suite of tools. The evolution of India’s data localization policies could significantly influence global digital diplomacy. Moving from strict data localization to permitting certain cross-border data flows aligns India more closely with global digital trade norms, potentially enhancing its relationships with major markets like the US and EU. India is proactively revising its legal frameworks to better address the intricacies of cross-border data transfers within the realm of data privacy, especially for businesses. The forthcoming DPDPA regulations aim to balance the need for data protection with the operational requirements of digital commerce and governance.


Digital ID adoption: Implementation and security concerns

Digital IDs are poised to revolutionize sectors that rely heavily on secure and efficient identity verification. ... “As the Forrester experts note in the study, the complexities and disparities of global implementation across various landscapes highlight the strategic necessity of adopting a hybrid approach to digital IDs. Moreover, there is no single, universally accepted set of global standards for digital IDs that applies across all countries and sectors. Therefore, the large number of companies at the stage of active implementation demonstrates a growing need for frameworks and guidelines that aim to foster interoperability, security, and privacy across different digital ID systems,” said Ihar Kliashchou, CTO at Regula. “The good news is that several international organizations and standards bodies — New Technology Working Group in the International Civil Aviation Organization, the International Organization for Standardization (ISO), etc. — are working towards those standards. This seems to be a case in which slow and steady wins the race,” concluded Kliashchou.


Forrester: Preparing for the era of the AI PC

AI PCs are now disrupting the cloud-only AI model to bring that processing to local devices running any OS. But what is an AI PC exactly? Forrester defines an AI PC as a PC embedded with an AI chip and algorithms specifically designed to improve the experience of AI workloads across the computer processing unit (CPU), graphics processing unit (GPU) and neural processing unit (NPU). ... An AI PC also offers a way to improve the collaboration experience. Dedicated AI chipsets will improve the performance of classic collaboration features, such as background blur and noise, by sharing resources across CPUs, GPUs and NPUs. On-device AI offers the ability to render a much finer distinction between the subject and the blurred background. More importantly, the AI PC will also enable new use cases, such as eye contact correction, portrait blur, auto framing, lighting adjustment and digital avatars. Another benefit of AI chipsets on PCs is that they provide the means to optimise device performance and longevity. Previous AI use cases were feasible on PCs, but they drained the battery quickly. The addition of an NPU will help preserve battery life while employees run sustained AI workloads.


Gartner Reveals 5 Trends That Will Make Software Engineer

Herschmann said that while there is a worry that AI could eliminate coding jobs instead of just enhancing them, that worry is somewhat unfounded. "If anything, we believe there's going to be a need for more developers, which may at first seem a little counterintuitive, but the reality is that we're still in the early stages of all of this," he said. "While generative AI is quite impressive in the beginning, if you dig a little bit deeper, you realize it's shinier than it really is," Herschmann said. So instead of replacing developers, AI will be more of a partner to them. ... Coding is just a small part of a developer's role. There are a lot of other things they need to do, such as keep the environment running, configuration work, and so on. So it makes sense to have a platform engineering team to take some of this work off developers' plates so they can focus on building the product, according to Herschmann. "Along with that though comes a potential scaling effect because you can then provide that same environment and the skills of that team to others as you scale up," he said. 


Beyond blockchain: Unlocking the potential of Directed Acyclic Graphs (DAGs)

DAGs are a type of data structure that uses a topological ordering, allowing for multiple branches that converge but do not loop back on themselves. Imagine a network of interconnected highways where each transaction can follow its own distinct course, branching off and joining forces with other transactions as required. This structure enables simultaneous transactions, eliminating the need for sequential processing, which is a bottleneck in traditional blockchain systems. ... One of the notable challenges of traditional blockchain technology is its scalability. DAGs address this issue by allowing more transactions to be processed in parallel, significantly increasing throughput, a key advantage for real-time applications in commodity trading and supply chain management. DAGs are more energy-efficient than proof-of-work blockchains, as they do not require substantial computational power for intensive mining activities, aligning with global and particularly India’s increasing focus on sustainable technological solutions. But the benefits of DAGs don’t stop here. Imagine a scenario where a shipment of perishable goods is delayed due to unforeseen circumstances, such as adverse weather conditions.


Pioneering the future of personalised experiences and data privacy in the digital age

Zero-party data (ZPD) is at the core of Affinidi's strategy and is crucial for businesses navigating consumer interactions. ZPD refers to information consumers willingly share with companies for specific benefits, such as personalised offers and services. Consider an avid traveller who frequently books trips online. He might share his travel preferences with a travel company, such as favourite destinations, preferred accommodation types, and activity interests. This data allows the company to tailor its offerings precisely to his tastes. For instance, if he loves beach destinations and luxury hotels, the company can send him personalised travel packages featuring exclusive beach resorts with premium amenities. ... As data privacy regulations tighten, businesses must prioritise consented and accurate data sources, reducing legal risks and dependence on external data pools. Trust can be viewed as a currency, altering customers' loyalty and buying decisions. A survey by PWC showed that 33% of customers pay a premium to companies because they trust them. 


Shut the back door: Understanding prompt injection and minimizing risk

You don’t have to be an expert hacker to attempt to misuse an AI agent; you can just try different prompts and see how the system responds. Some of the simplest forms of prompt injection are when users attempt to convince the AI to bypass content restrictions or ignore controls. This is called “jailbreaking.” One of the most famous examples of this came back in 2016, when Microsoft released a prototype Twitter bot that quickly “learned” how to spew racist and sexist comments. More recently, Microsoft Bing (now “Microsoft Co-Pilot) was successfully manipulated into giving away confidential data about its construction. Other threats include data extraction, where users seek to trick the AI into revealing confidential information. Imagine an AI banking support agent that is convinced to give out sensitive customer financial information, or an HR bot that shares employee salary data. And now that AI is being asked to play an increasingly large role in customer service and sales functions, another challenge is emerging. Users may be able to persuade the AI to give out massive discounts or inappropriate refunds. 


Say goodbye to break-and-fix patches

A ‘break-and-fix’ mindset can be necessary in emergency situations, but it can also make things worse. While it can be tempting to view maintenance work as adding little value, failing to address these problems properly will only create future issues as you accumulate tech debt. Fixing those issues will require more resources — time, money, skills — that will undoubtedly hurt your organization. ... Tech debt is one of those “invisible issues” hiding in IT systems. Opting for quick fixes to solve immediate issues, rather than undertaking comprehensive upgrades might seem cost-effective and straightforward at first. However, over time, the accumulation of these patches contributes significantly to tech debt. ... Despite the potential consequences of inadequate and reactive maintenance, adopting a more proactive approach can be challenging for many businesses. Economic pressures and budgetary constraints are forcing leaders to reduce expenses and ‘do more with less’ — this leads to situations where areas not traditionally viewed as value-adding (like maintenance) are deprioritized. This is where managed services can help. 



Quote for the day:

''Smart leaders develop people who develop others, don't waste your time on those who won't help themselves.'' -- John C Maxwell

Daily Tech Digest - May 26, 2024

The modern CISO: Scapegoat or value creator?

To showcase the value of their programs and demonstrate effectiveness, CISOs must establish clear communication and overcome the disconnect between the board and their team. It’s up to the CISO to ensure the board understands the level of cyber risk their organization is facing and what they need to increase the cyber resilience of their organization. Presenting cyber risk levels in monetary terms with actionable next steps is necessary to bring the board of directors on the same page and open an honest line of communication, while elevating their cybersecurity team to the role of value creator. ... CISOs are deeply wary about sharing too many details on their cybersecurity posture in the public domain, because of the unnecessary and preventable risk of exposing their organizations to cyberattacks, which are expected to cause $10.5 trillion in damages by 2025. Filing an honest 10K while preserving your organization’s cyber defenses requires a delicate balance. We’ve already seen Clorox fall victim when the balance was off. ... Given the pace at which the cybersecurity landscape is continuing to evolve, the CISO’s job is getting tougher. 


This Week in AI: OpenAI and publishers are partners of convenience

In an appearance on the “All-In” podcast, Altman said that he “definitely [doesn’t] think there will be an arms race for [training] data” because “when models get smart enough, at some point, it shouldn’t be about more data — at least not for training.” Elsewhere, he told MIT Technology Review’s James O’Donnell that he’s “optimistic” that OpenAI — and/or the broader AI industry — will “figure a way out of [needing] more and more training data.” Models aren’t that “smart” yet, leading OpenAI to reportedly experiment with synthetic training data and scour the far reaches of the web — and YouTube — for organic sources. But let’s assume they one day don’t need much additional data to improve by leaps and bounds. ... Through licensing deals, OpenAI effectively neutralizes a legal threat — at least until the courts determine how fair use applies in the context of AI training — and gets to celebrate a PR win. Publishers get much-needed capital. And the work on AI that might gravely harm those publishers continues.


Private equity looks to the CIO as value multiplier

A newer way of thinking about value creation focuses on IT, he says, because nearly every company, perhaps even the mom-and-pop coffee shop down the street, is a heavy IT user. “With this third wave, we’re seeing private equity firms retain in-house IT leadership, and that in-house IT leadership has led to more value creation,” Buccola says. “Firms with great IT leadership, a sound IT strategy, and a forward-thinking IT strategy, are creating more value.” ... “All roads lead to IT,” says Corrigan, a veteran of PE-backed firms, with World Insurance backed by Goldman Sachs and Charlesbank. “Every aspect of the business is dependent on some type of technology.” Corrigan sees CIOs being more frequently consulted when PE-back firms look to IT systems to drive operational efficiencies. In some cases, cutting costs is a quicker path to return on investment than revenue growth. “Every dollar you can cut out of the bottom line is worth several dollars of revenue generated,” he says. ... “The modern CIO in a private equity environment is no longer just a back-office role but a strategic partner capable of driving the business forward,” he says.


Sad Truth Is, Bad Tests Are the Norm!

When it comes to testing, many people seem to have the world view that hard-to-maintain tests are the norm and acceptable. In my experience, the major culprits are BDD frameworks that are based on text feature files. This is amplifying waste. The extra feature file layer in theory allows;The user to swap out the language at a later date; Allows a business person to write user stories and or acceptance criteria; Allows a business person to read the user stories and or acceptance criteria; Collaboration; Etc… You have actually added more complexity than you think, for little benefit. I am explicitly critiquing the approach of writing the extra feature file layer first, not the benefits of BDD as a concept. You test more efficiently, with better results not writing the feature file layer, such as with Smart BDD, where it’s generated by code. Here I compare the complexities and differences between Cucumber and Smart BDD. ... Culture is hugely important, I’m sure we and our bosses and senior leaders would all ultimately agree with the following:For more value, you need more feedback and less waste; For more feedback, you need more value and less waste; For less waste, you need more value and more feedback


6 Months Under the SEC’s Cybersecurity Disclosure Rules

There have been calls for regulatory harmonization. For example, the Biden-Harris Administration’s National Cybersecurity Strategy released last year calls for harmonization and streamlining of new and existing regulations to ease the burden of compliance. But in the meantime, enterprise leadership teams must operate in this complicated regulatory landscape, made only more complicated by budgetary issues. “Security budgets aren't growing for the most part. So, there's this tension between diverting resources to security versus diverting resources to compliance … on top of everything else that the CISOs have going on,” says Algeier. So, what should CISOs and enterprise leadership teams be doing as they continue to work under these SEC rules and other regulatory obligations? “CISOs should keep in mind the ability to quickly, easily, and efficiently fulfill the requirements laid out by the SEC, especially if they were to fall victim to an attack,” says Das. “This means having not only the right processes in place, but investments into tools that can ensure reporting occurs in the newly condensed timeline.”


Despite increased budgets, organizations struggle with compliance

“While regulations are driving strategy shifts and increased budgets, the talent shortage and fragmented infrastructure remain obstacles to compliance and resilience. To succeed, organizations must find the right balance between human expertise for complex situations and AI-enhanced automation tools for routine tasks. This will alleviate operational strain and ensure security professionals can focus on the parts of the job where human judgment is irreplaceable.” ... 93% of organizations report rethinking their cybersecurity strategy in the past year due to the rise of new regulations, with 58% stating they have completely reconsidered their approach. The strategy shifts are also impacting the roles of cybersecurity decision-makers, with 45% citing significant new responsibilities. 92% of organizations reported an increase in their allocated budgets. Among these organizations, a significant portion (36%) witnessed budget increases of 20% to 49%, and a notable 23% saw increases exceeding 50%. 


Fundamentals of Dimensional Data Modeling

Dimensional modeling focuses its diagramming on facts and dimensions:Facts contain crucial quantitative data to track business processes. Examples of these metrics include sales figures or number of subscriptions. Dimensions contain referential pieces of information. Examples of dimensions include customer name, price, date, or location. Keeping the dimensions separate from facts makes it easier for analysts to slice-and-dice and filter data to align with the relevant context underlying a business problem. ... Dimensional modeling provides a basis for meaningful analytics gathered from a data warehouse for many reasons. Its processes lead to standardizing dimensions through presenting the data blueprint intuitively. ... Dimensional data modeling promises quick access to business insights when searching a data warehouse. Modelers provide a template to guide business conversations across various teams by selecting the business process, defining the grain, and identifying the dimensions and fact tables. Alignment in the design requires these processes, and Data Governance plays an integral role in getting there. 


Why the AI Revolution Is Being Led from Below

If shadow IT was largely defined by some teams’ use of unauthorized vendors and platforms, shadow AI is often driven by the use of AI tools like ChatGPT by individual employees and users, on their own and even surreptitiously. ... So why is that a problem? The proliferation of Shadow AI can deliver many of the same benefits as officially sanctioned AI strategies, streamlining processes, automating repetitive tasks, and enhancing productivity. Employees are mainly drawn to deploy their own AI tools for precisely these reasons — they can hand off chunks of taxing work to these invisible assistants. Some industry observers see the plus side of all this and are actively encouraging the “democratization” of AI tools. At this week’s The Financial Brand Forum 2024, Cornerstone Advisors’ Ron Shevlin made it his top recommendation: “My #1 piece of advice is ‘drive bottom-up use.’ Encourage widespread AI experimentation by your team members. Then document and share the process and output improvements as widely as possible.”


A Strategic Approach to Stopping SIM Swap Fraud

Fraudsters are cautious about their return on investment. SIM swap fraud is a high-risk endeavor, and they typically expect higher rewards. It involves the risk of physically visiting telco operator premises, obtaining genuine looking customer identification documents, using employees' mules, or bribing bank or telco staff. Their targets are mostly high-balance accounts, including both bank accounts and wallets. Over the years, we have learned that customers with substantial account balances might often share bank details and OTPs during social engineering schemes, but they typically refrain from sharing their PIN due to the perceived risk involved. Even if a small percentage of customers were to share their PIN, the risk would still be minimized, as the majority of potential victims would refrain from sharing their PIN. The fraudsters would need to compromise at three levels instead of two: data gathering, compromising the telco operator and persuading the customer. If customers detect something suspicious, they may become alert, resulting in fraudsters wasting their investments.


Complexity snarls multicloud network management

While each cloud provider does its best to make networking simple across clouds, all have very nuanced differences and varied best practices for approaching the same problem, says Ed Wood, global enterprise network lead at business advisory firm Accenture. This makes being able to create enterprise-ready, secured networks across the cloud challenging, he adds. Wasim believes that a lack of intelligent data utilization at crucial stages, from data ingestion to proactive management, further complicates the process. “The sheer scale of managing resources, coupled with the dynamic nature of cloud environments, makes it challenging to achieve optimal performance and efficiency.” Making network management even more challenging is a lack of clarity on roles and responsibilities. This can be attributed to an absence of agreement on shared responsibility models, Wasim says. As a result, stakeholders, including customers, cloud service providers, and any involved third parties, might each hold different views on responsibility and accountability regarding data compliance, controls, and cloud operations management.



Quote for the day:

"You may be disappointed if you fail, but you are doomed if you don't try." -- Beverly Sills

Daily Tech Digest - May 22, 2024

Guide to Kubernetes Security Posture Management (KSPM)

Bad security posture impacts your ability to respond to new and emerging threats because of extra “strain” on your security capabilities caused by misconfigurations, gaps in tooling, or inadequate training. ... GitOps manages all cluster changes via Configuration as Code (CaC) in Git, eliminating manual cluster modifications. This approach aligns with the Principle of Least Privilege and offers benefits beyond security. GitOps ensures deployment predictability, stability and admin awareness of the cluster’s state, preventing configuration drift and maintaining consistency across test and production clusters. Additionally, it reduces the number of users with write access, enhancing security. ... Human log analysis is crucial for retrospectively reviewing security incidents. However, real-time monitoring and correlation are essential for detecting incidents initially. While manual methods like SIEM solutions with dashboards and alerts can be effective, they require significant time and effort to extract relevant data. 


Where’s the ROI for AI? CIOs struggle to find it

The AI market is still developing, and some companies are adopting the technology without a specific use case in mind, he adds. Kane has seen companies roll out Microsoft Copilot, for example, without any employee training about its uses. ... “I have found very few companies who have found ROI with AI at all thus far,” he adds. “Most companies are simply playing with the novelty of AI still.” The concern about calculating the ROI also rings true to Stuart King, CTO of cybersecurity consulting firm AnzenSage and developer of an AI-powered risk assessment tool for industrial facilities. With the recent red-hot hype over AI, many IT leaders are adopting the technology before they know what to do with it, he says. “I think back to the first discussions that we had within the organizations that are working with, and it was a case of, ‘Here’s this great new thing that we can use now, let’s go out and find a use for it,’” he says. “What you really want to be doing is finding a problem to solve with it first.” As a developer who has integrated AI into his own software, King is not an AI skeptic. 


100 Groups Urge Feds to Put UHG on Hook for Breach Notices

Some experts advise HIPAA-regulated entities that are likely affected by a Change Healthcare breach to take precautionary measures now to prepare for their potential notification duties involving a compromise of their patients' PHI. ... HIPAA-regulated Change Healthcare customers also have an obligation under HIPAA to perform "reasonable diligence" to investigate and obtain information about the incident to determine whether the incident triggers notice obligations to their patients or members, said attorney Sara Goldstein of law firm BakerHostetler. Reasonable diligence includes Change Healthcare customers frequently checking UHG and Optum's websites for updates on the restoration and data analysis process, contacting their Change Healthcare account representative on a regular basis to see if there are any updates specific to their organization, and engaging outside privacy counsel to submit a request for information directly to UnitedHealth Group to obtain further information about the incident, Goldstein said.


‘Innovation Theater’ in Banking Gives Way to a More Realistic and Productive Function

The conservative approach many institutions are taking to GenAI reflects that reality. Buy Now, Pay Later meanwhile makes a great example of how exciting new innovations can unexpectedly reveal a dark side. ... In many institutions, innovation has become less about pure invention and more about applying what’s out there already in new ways and combinations to solve common problems. Doing so doesn’t necessarily require geniuses, but you do need highly specialized “plumbers” who can link together multiple technologies in smart ways. Even the regulatory view has evolved. There was a time when federal regulators held open doors to innovation, even to the extent of offering “sandboxes” to let innovations sprout without weighing them down initially with compliance burdens. But the Consumer Financial Protection Bureau, under the Biden administration, did away with its sandbox early on. Washington today walks a more cautious line on innovation, and that line could veer. The bottom line? Innovators who take their jobs, and the impact of their jobs, seriously, realize that banking innovation must grow up.


AI glasses + multimodal AI = a massive new industry

Both OpenAI and Google demos clearly reveal a future where, thanks to the video mode in multimodal AI, we’ll be able to show AI something, or a room full of somethings, and engage with a chatbot to help us know, process, remember or understand. It would be all very natural, except for one awkward element. All this holding and waving around of phones to show it what we want it so “see” is completely unnatural. Obviously — obviously! — video-enabled multimodal AI is headed for face computers, a.k.a. AI glasses. And, in fact, one of the most intriguing elements of the Google demo was that during a video demonstration, the demonstrator asked Astra-enhanced Gemini if it remembered where her glasses were, and it directed her back to a table, where she picked up the glasses and put them on. At that point, the glasses — which were prototype AI glasses — seamlessly took over the chat session from the phone (the whole thing was surely still running on the phone, with the glasses providing the camera, microphones and so on).
 

Technological complexity drives new wave of identity risks

The concept zero standing privilege (ZSP) requires that a user only be granted the minimum levels of access and privilege needed to complete a task, and only for a limited amount of time. Should an attacker gain entry to a user’s account, ZSP ensures there is far less potential for attackers to access sensitive data and systems. The study found that 93% of security leaders believe ZSP is effective at reducing access risks within their organization. Additionally, 91% reported that ZSP is being enforced across at least some of their company’s systems. As security leaders face greater complexity across their organizations’ systems and escalating attacks from adversaries, it’s no surprise that risk reduction was cited as respondents’ top priority for identity and access management (55%). This was followed by improving team productivity (50%) and automating processes (47%). Interestingly, improving user experience was cited as the top priority among respondents who experienced multiple instances of attacks or breaches due to improper access in the last year.


The Legal Issues to Consider When Adopting AI

Different types of data bring different issues of consent and liability. For example, consider whether your data is personally identifiable information, synthetic content (typically generated by another AI system), or someone else’s intellectual property. Data minimization—using only what you need—is a good principle to apply at this stage. Pay careful attention to how you obtained the data. OpenAI has been sued for scraping personal data to train its algorithms. And, as explained below, data-scraping can raise questions of copyright infringement. ... Companies also need to consider the potential forinadvertent leakage of confidential and trade-secret information by an AI product. If allowing employees to internally use technologies such as ChatGPT (for text) and Github Copilot (for code generation), companies should note that such generative AI tools often take user prompts and outputs as training data to further improve their models. Luckily, generative AI companies typically offer more secure services and the ability to opt out of model training.


How innovative power sourcing can propel data centers toward sustainability

The increasing adoption of Generative AI technologies over the past few years has placed unprecedented energy demands on data centers, coinciding with a global energy emergency exacerbated by geopolitical crises. Electricity prices have since reached record highs in certain markets, while oil prices soared to their highest level in over 15 years. Volatile energy markets have awakened a need in the general population to become more flexible in their energy use. At the same time, the trends present an opportunity for the data center sector to get ahead of the game. By becoming managers of energy, as opposed to just consumers, market players can find more efficient and cost-effective ways to source power. Innovative renewable options present a highly attractive avenue in this regard. As a result, data center providers are working more collaboratively with the energy sector for solutions. And for them, it’s increasingly likely that optimizing efficiency won’t be just about being close to the grid, but also about being close to the power-generation site – or even generating and storing power on-site.


Google DeepMind Introduces the Frontier Safety Framework

Existing protocols for AI safety focus on mitigating risks from existing AI systems. Some of these methods include alignment research, which trains models to act within human values, and implementing responsible AI practices to manage immediate threats. However, these approaches are mainly reactive and address present-day risks, without accounting for the potential future risks from more advanced AI capabilities. In contrast, the Frontier Safety Framework is a proactive set of protocols designed to identify and mitigate future risks from advanced AI models. The framework is exploratory and intended to evolve as more is learned about AI risks and evaluations. It focuses on severe risks resulting from powerful capabilities at the model level, such as exceptional agency or sophisticated cyber capabilities. The Framework aims to align with existing research and Google’s suite of AI responsibility and safety practices, providing a comprehensive approach to preventing any potential threats.


Proof-of-concept quantum repeaters bring quantum networks a big step closer

There are two main near-term use cases for quantum networks. The first use case is to transmit encryption keys. The idea is that public key encryption – the type currently used to secure Internet traffic – could soon be broken by quantum computers. Symmetrical encryption – where the same key is used to both encrypt and decrypt messages – is more future proof, but you need a way to get that key to the other party. ... Today, however, the encryption we currently have is good enough, and there’s no immediate need for companies to look for secure quantum networks. Plus, there’s progress already being made on creating quantum-proof encryption algorithms. The other use for quantum networks is to connect quantum computers. Since quantum networks transmit entangled photons, the computers so connected would also be entangled, theoretically allowing for the creation of clustered quantum computers that act as a single machine. “There are ideas for how to take quantum repeaters and parallelize them to provide very high connectivity between quantum computers,” says Oskar Painter, director of quantum hardware at AWS. 



Quote for the day:

"Many of life’s failures are people who did not realize how close they were to success when they gave up." -- Thomas Edison

Daily Tech Digest - May 21, 2024

Most Software Engineers Know Nothing About Hardware

While most software engineers would want to believe that there is not a need for them to know the intricacies of hardware, as long as what they are using offers support for the software they want to use and build. But on the contrary, a user offered a thought-provoking take, suggesting that understanding hardware could bolster several fields, such as cybersecurity. “I think it would help in programming to know how the chip and memory think only to secure the program from hackers,” he said. This highlights a practical benefit of hardware knowledge that goes beyond mere academic interest. Moreover, software engineers who know a thing or two about hardware can create better softwares and build good software capability on the hardware. This perspective suggests that a deeper understanding of hardware can lead to more efficient and innovative software solutions. The roles of software engineers are also changing with the advent of AI tools. For over a decade, a popular belief has been that a computer science degree is all you need to tread the path to wealth, especially in a country like India. 


Network teams are ready to switch tool vendors

For a variety of reasons, network management tools have historically been sticky in IT organizations. First, tool vendors sold them with perpetual licenses, which meant a long-term investment. Second, tools could take time to implement, especially for larger companies that invest months of time customizing data collection mechanisms, dashboards, alerts, and more. Also, many tools were difficult to use, so they came with a learning curve. But things have changed. Most network management tools are now available as SaaS solutions with a subscription license. Many vendors have developed new automation features and AI-driven features that reduce the amount of customization that some IT organizations will need to do. ... For all these reasons, many IT organizations feel less locked into their network management tools today. Still, it’s important to note that replacing tools remains challenging. In fact, network teams that struggle to hire and retain skilled personnel are less likely to replace a tool. They don’t have the capacity to tackle such a project because they’re barely keeping up with day-to-day operations. Larger enterprises, which have larger and more complex networks, were also less open to new tools.


Reducing CIO-CISO tension requires recognizing the signs

In the case of highly critical vulnerabilities that have been exploited, the CISO will want patches applied immediately, and the CIO is likely aligned with this urgency. But for medium-level patches, the CIO may be under pressure to defer these disruptions to production systems, and may push back on the CISO to wait a week or even months before patching. ... Incident management is another are ripe for tension. The CISO has a leadership role to play when there is a serious cyber or business disruption incident, and is often the“messenger” that shares the bad news. Naturally, the CIO wants to be immediately informed, but often the details are sparse with many unknowns. This can make the CISO look bad to the CIO, as there are often more questions than answers at this early stage. ... A fifth example is DevOps, as many CIOs, including myself, advocate for continuous delivery at velocity. Unfortunately, not as many CIOs advocate for DevSecOps to embed cybersecurity testing in the process. This is perhaps because the CIO is often under pressure from executive stakeholders to release new software builds and thus accept the risk that there may be some iteration required if this is not perfect.


Strategies for combating AI-enhanced BEC attacks

In addition to employee training and a zero-trust approach, companies should leverage continuous monitoring and risk-based access decisions. Security teams can use advanced analytics to monitor user activity and identify anomalies that might indicate suspicious behavior. Additionally, zero trust allows for implementing risk-based access controls – for example, access from an unrecognized location might trigger a stronger authentication challenge or require additional approval before granting access. Security teams can also use network segmentation to contain threats. This involves dividing the network into smaller compartments. So, even if attackers manage to breach one section, their movement is restricted, preventing them from compromising the entire network. ... Building a robust defense against BEC attacks requires a layered approach. Comprehensive security strategies that leverage zero trust are a must. However, they can’t do all the heavy lifting alone. Businesses must also empower their employees to make the right decisions by investing in security awareness training that incorporates real-world scenarios and teaches employees how to identify and report suspicious activities.


From sci-fi to reality: The dawn of emotionally intelligent AI

Greater ability to integrate audio, visual and textual data opens potentially transformative opportunities in sectors like healthcare, where it could lead to more nuanced patient interaction and personalized care plans. ... As GPT-4o and similar offerings continue to evolve, we can anticipate more sophisticated forms of natural language understanding and emotional intelligence. This could lead to AI that not only understands complex human emotions but also responds in increasingly appropriate and helpful ways. The future might see AI becoming an integral part of emotional support networks, providing companionship and aid that feels genuinely empathetic and informed. The journey of AI from niche technology to a fundamental part of our daily interactions is both exhilarating and daunting. To navigate this AI revolution responsibly, it is essential for developers, users and policymakers to engage in a rigorous and ongoing dialogue about the ethical use of these technologies. As GPT-4o and similar AI tools become more embedded in our daily lives, we must navigate this transformative journey with wisdom and foresight, ensuring AI remains a tool that empowers rather than diminishes our humanity.


Unlocking DevOps Mastery: A Comprehensive Guide to Success

From code analysis and vulnerability scanning to access control and identity management, organizations must implement comprehensive security controls to mitigate risks throughout the software development lifecycle. Furthermore, compliance with industry standards and regulatory requirements must be baked into the DevOps process from the outset rather than treated as an afterthought. Moreover, organizations must be vigilant about ethical considerations and algorithmic bias in environments leveraging AI and machine learning, where the stakes are heightened. By embedding security and compliance into every stage of the DevOps pipeline, organizations can build trust and confidence among stakeholders and mitigate potential risks to their reputation and bottom line. DevSecOps, an extension of DevOps, emphasizes integrating security practices throughout the software development lifecycle (SDLC). Several key security practices and frameworks should be integrated into the DevOps program. 


Composable Enterprise: The Evolution of MACH and Jamstack

As the Jamstack and the MACH Architecture continue to evolve, categorizing the MACH architecture as “Jamstack for the enterprise” might not entirely be accurate, but it’s undeniable that the MACH approach has been gaining traction among vendors and has increasing appeal to enterprise customers. Demeny points out that the MACH Alliance recently celebrated passing the 100 certified member mark, and believes that the organization and the MACH architecture are entering a new phase. The MACH approach has been gaining traction among vendors and has increasing appeal to enterprise customers. “This also means that the audience profile of the MACH community and buyers is starting to shift a bit from developers to more business-focused stakeholders,” said Demeny. ”As a result, the Alliance is producing more work around interoperability understanding and standards in order to help these newer stakeholders understand and navigate the landscape.” Regardless of what tech stack developers and organizations choose, the evolution of the Jamstack and the MACH architecture are providing more options and flexibility for developers. 


The Three As of Building A+ Platforms: Acceleration, Autonomy, and Accountability

If the why is about creating value for the business, the what is all about driving velocity for your users, bringing delight to your users, and making your users awesome at what they do. This requires bringing a product mindset to building a platform. ... This is where I found it very useful to think in terms of the Double Diamond framework, where the first diamond is about product discovery and problem definition and the second is about building a solution. While in the first diamond you can do divergent thinking and ideation, either widely or deeply, the second diamond allows for action-oriented, focused thinking that converges into developing and delivering the solution. ... Platforms cannot be shaky - solid fundamentals (Reliability, Security, Privacy, Compliance, disruption) and operational excellence are tablestakes, not a nice-to-have. Our platforms have to be stable. In our case, we decided to put a stop to all feature delivery for about a quarter, did a methodical analysis of all the failures that led to the massive drop in deploy rates, and focused on crucial reliability efforts until we brought this metric back up to 99%+.


Training LLMs: Questions Rise Over AI Auto Opt-In by Vendors

"Organizations who use these technologies must be clear with their users about how their information will be processed," said John Edwards, Britain's Information Commissioner, in a speech last week at the New Scientist Emerging Technologies summit in London. "It's the only way that we continue to reap the benefits of AI and emerging technologies." Whether opting in users by default complies with GDPR remains an open question. "It's hard to think how an opt-out option can work for AI training data if personal data is involved," Armstrong said. "Unless the opt-out option is really prominent - for example, clear on-screen warnings; burying it in the terms and conditions won't be enough - that's unlikely to satisfy GDPR's transparency requirements." Clear answers remain potentially forthcoming. "Many privacy leaders have been grappling with questions around topics such as transparency, purpose limitation and grounds to process in relation to the use of personal data in the development and use of AI," said law firm Skadden, Arps, Slate, Meagher & Flom LLP, in a response to a request from the U.K. government to domestic regulators to detail their approach to AI. 


Data Owner vs. Data Steward: What’s the Difference?

Data owners (also called stakeholders) are often senior leaders or bosses within the organization, who have taken responsibility for managing the data in their specific department or business area. For instance, the director of marketing or the head of production are often data owners because the data used by their staff is critical to their operations. It is a position that requires both maturity and experience. Data owners are also responsible for implementing the security measures necessary for protecting the data they own – encryption, firewalls, access controls, etc. The data steward, on the other hand, is responsible for managing the organization’s overall Data Governance policies, monitoring compliance, and ensuring the data is of high quality. They also oversee the staff, as a form of the data police, to ensure they are following the guidelines that support high-quality data. ... Data stewards can offer valuable recommendations and insights to data owners, and vice versa. Regular meetings and collaboration between the data steward and data owners are necessary for successful Data Governance and management.



Quote for the day:

"Pursue one great decisive aim with force and determination." -- Carl Von Clause Witz