Showing posts with label employee engagement. Show all posts
Showing posts with label employee engagement. Show all posts

Daily Tech Digest - August 22, 2025


Quote for the day:

“Become the kind of leader that people would follow voluntarily, even if you had no title or position.” -- Brian Tracy


Leveraging DevOps to accelerate the delivery of intelligent and autonomous care solutions

Fast iteration and continuous delivery have become standard in industries like e-commerce and finance. Healthcare operates under different rules. Here, the consequences of technical missteps can directly affect care outcomes or compromise sensitive patient information. Even a small configuration error can delay a diagnosis or impact patient safety. That reality shifts how DevOps is applied. The focus is on building systems that behave consistently, meet compliance standards automatically, and support reliable care delivery at every step. ... In many healthcare environments, developers are held back by slow setup processes and multi-step approvals that make it harder to contribute code efficiently or with confidence. This often leads to slower cycles and fragmented focus. Modern DevOps platforms help by introducing prebuilt, compliant workflow templates, secure self-service provisioning for environments, and real-time, AI-supported code review tools. In one case, development teams streamlined dozens of custom scripts into a reusable pipeline that provisioned compliant environments automatically. The result was a noticeable reduction in setup time and greater consistency across projects. Building on this foundation, DevOps also play a vital role in development and deployment of the Machine Learning Models. 


Tackling the DevSecOps Gap in Software Understanding

The big idea in DevSecOps has always been this: shift security left, embed it early and often, and make it everyone’s responsibility. This makes DevSecOps the perfect context for addressing the software understanding gap. Why? Because the best time to capture visibility into your software’s inner workings isn’t after it’s shipped—it’s while it’s being built. ... Software Bill of Materials (SBOMs) are getting a lot of attention—and rightly so. They provide a machine-readable inventory of every component in a piece of software, down to the library level. SBOMs are a baseline requirement for software visibility, but they’re not the whole story. What we need is end-to-end traceability—from code to artifact to runtime. That includes:Component provenance: Where did this library come from, and who maintains it? Build pipelines: What tools and environments were used to compile the software? Deployment metadata: When and where was this version deployed, and under what conditions? ... Too often, the conversation around software security gets stuck on source code access. But as anyone in DevSecOps knows, access to source code alone doesn’t solve the visibility problem. You need insight into artifacts, pipelines, environment variables, configurations, and more. We’re talking about a whole-of-lifecycle approach—not a repo review.


Navigating the Legal Landscape of Generative AI: Risks for Tech Entrepreneurs

The legal framework governing generative AI is still evolving. As the technology continues to advance, the legal requirements will also change. Although the law is still playing catch-up with the technology, several jurisdictions have already implemented regulations specifically targeting AI, and others are considering similar laws. Businesses should stay informed about emerging regulations and adapt their practices accordingly. ... Several jurisdictions have already enacted laws that specifically govern the development and use of AI, and others are considering such legislation. These laws impose additional obligations on developers and users of generative AI, including with respect to permitted uses, transparency, impact assessments and prohibiting discrimination. ... In addition to AI-specific laws, traditional data privacy and security laws – including the EU General Data Protection Regulation (GDPR) and U.S. federal and state privacy laws – still govern the use of personal data in connection with generative AI. For example, under GDPR the use of personal data requires a lawful basis, such as consent or legitimate interest. In addition, many other data protection laws require companies to disclose how they use and disclose personal data, secure the data, conduct data protection impact assessments and facilitate individual rights, including the right to have certain data erased. 


Five ways OSINT helps financial institutions to fight money laundering

By drawing from public data sources available online, such as corporate registries and property ownership records, OSINT tools can provide investigators with a map of intricate corporate and criminal networks, helping them unmask UBOs. This means investigators can work more efficiently to uncover connections between people and companies that they otherwise might not have spotted. ... External intelligence can help analysts to monitor developments, so that newer forms of money laundering create fewer compliance headaches for firms. Some of the latest trends include money muling, where criminals harness channels like social media to recruit individuals to launder money through their bank accounts, and trade-based laundering, which allows bad actors to move funds across borders by exploiting international complexity. OSINT helps identify these emerging patterns, enabling earlier intervention and minimizing enforcement risks. ... When it comes to completing suspicious activity reports (SARs), many financial institutions rely on internal data, spending millions on transaction monitoring, for instance. While these investments are unquestionably necessary, external intelligence like OSINT is often neglected – despite it often being key to identifying bad actors and gaining a full picture of financial crime risk. 


The hard problem in data centres isn’t cooling or power – it’s people

Traditional infrastructure jobs no longer have the allure they once did, with Silicon Valley and startups capturing the imagination of young talent. Let’s be honest – it just isn’t seen as ‘sexy’ anymore. But while people dream about coding the next app, they forget someone has to build and maintain the physical networks that power everything. And that ‘someone’ is disappearing fast. Another factor is that the data centre sector hasn’t done a great job of telling its story. We’re seen as opaque, technical and behind closed doors. Most students don’t even know what a data centre is, and until something breaks  it doesn’t even register. That’s got to change. We need to reframe the narrative. Working in data centres isn’t about grey boxes and cabling. It’s about solving real-world problems that affect billions of people around the world, every single second of every day. ... Fixing the skills gap isn’t just about hiring more people. It’s about keeping the knowledge we already have in the industry and finding ways to pass it on. Right now, we’re on the verge of losing decades of expertise. Many of the engineers, designers and project leads who built today’s data centre infrastructure are approaching retirement. While projects operate at a huge scale and could appear exciting to new engineers, we also have inherent challenges that come with relatively new sectors. 


Multi-party computation is trending for digital ID privacy: Partisia explains why

The main idea is achieving fully decentralized data, even biometric information, giving individuals even more privacy. “We take their identity structure and we actually run the matching of the identity inside MPC,” he says. This means that neither Partisia nor the company that runs the structure has the full biometric information. They can match it without ever decrypting it, Bundgaard explains. Partisia says it’s getting close to this goal in its Japan experiment. The company has also been working on a similar goal of linking digital credentials to biometrics with U.S.-based Trust Stamp. But it is also developing other identity-related uses, such as proving age or other information. ... Multiparty computation protocols are closing that gap: Since all data is encrypted, no one learns anything they did not already know. Beyond protecting data, another advantage is that it still allows data analysts to run computations on encrypted data, according to Partisia. There may be another important role for this cryptographic technique when it comes to privacy. Blockchain and multiparty computation could potentially help lessen friction between European privacy standards, such as eIDAS and GDPR, and those of other countries. “I have one standard in Japan and I travel to Europe and there is a different standard,” says Bundgaard. 


MIT report misunderstood: Shadow AI economy booms while headlines cry failure

While headlines trumpet that “95% of generative AI pilots at companies are failing,” the report actually reveals something far more remarkable: the fastest and most successful enterprise technology adoption in corporate history is happening right under executives’ noses. ... The MIT researchers discovered what they call a “shadow AI economy” where workers use personal ChatGPT accounts, Claude subscriptions and other consumer tools to handle significant portions of their jobs. These employees aren’t just experimenting — they’re using AI “multiples times a day every day of their weekly workload,” the study found. ... Far from showing AI failure, the shadow economy reveals massive productivity gains that don’t appear in corporate metrics. Workers have solved integration challenges that stymie official initiatives, proving AI works when implemented correctly. “This shadow economy demonstrates that individuals can successfully cross the GenAI Divide when given access to flexible, responsive tools,” the report explains. Some companies have started paying attention: “Forward-thinking organizations are beginning to bridge this gap by learning from shadow usage and analyzing which personal tools deliver value before procuring enterprise alternatives.” The productivity gains are real and measurable, just hidden from traditional corporate accounting. 


The Price of Intelligence

Indirect prompt injection represents another significant vulnerability in LLMs. This phenomenon occurs when an LLM follows instructions embedded within the data rather than the user’s input. The implications of this vulnerability are far-reaching, potentially compromising data security, privacy, and the integrity of LLM-powered systems. At its core, indirect prompt injection exploits the LLM’s inability to consistently differentiate between content it should process passively (that is, data) and instructions it should follow. While LLMs have some inherent understanding of content boundaries based on their training, they are far from perfect. ... Jailbreaks represent another significant vulnerability in LLMs. This technique involves crafting user-controlled prompts that manipulate an LLM into violating its established guidelines, ethical constraints, or trained alignments. The implications of successful jailbreaks can potentially undermine the safety, reliability, and ethical use of AI systems. Intuitively, jailbreaks aim to narrow the gap between what the model is constrained to generate, because of factors such as alignment, and the full breadth of what it is technically able to produce. At their core, jailbreaks exploit the flexibility and contextual understanding capabilities of LLMs. While these models are typically designed with safeguards and ethical guidelines, their ability to adapt to various contexts and instructions can be turned against them.


The Strategic Transformation: When Bottom-Up Meets Top-Down Innovation

The most innovative organizations aren’t always purely top-down or bottom-up—they carefully orchestrate combinations of both. Strategic leadership provides direction and resources, while grassroots innovation offers practical insights and the capability to adapt rapidly. Chynoweth noted how strategic portfolio management helps companies “keep their investments in tech aligned to make sure they’re making the right investments.” The key is creating systems that can channel bottom-up innovations while ensuring they support the organization’s strategic objectives. Organizations that succeed in managing both top-down and bottom-up innovation typically have several characteristics. They establish clear strategic priorities from leadership while creating space for experimentation and adaptation. They implement systems for capturing and evaluating innovations regardless of their origin. And they create mechanisms for scaling successful pilots while maintaining strategic alignment. The future belongs to enterprises that can master this balance. Pure top-down enterprises will likely continue to struggle with implementation realities and changing market conditions. In contrast, pure bottom-up organizations would continue to lack the scale and coordination needed for significant impact.


Digital-first doesn’t mean disconnected for this CEO and founder

“Digital-first doesn’t mean disconnected – it means being intentional,” she said. For leaders it creates a culture where the people involved feel supported, wherever they’re working, she thinks. She adds that while many organisations found themselves in a situation where the pandemic forced them to establish a remote-first system, very few actually fully invested in making it work well. “High performance and innovation don’t happen in isolation,” said Feeney. “They happen when people feel connected, supported and inspired.” Sentiments which she explained are no longer nice to have, but are becoming a part of modern organisational infrastructure. One in which people are empowered to do their best work on their own terms. ... “One of the biggest challenges I have faced as a founder was learning to slow down, especially when eager to introduce innovation. Early on, I was keen to implement automation and technology, but I quickly realised that without reliable data and processes, these tools could not reach their full potential.” What she learned was, to do things correctly, you have to stop, review your foundations and processes and when you encounter an obstacle, deal with it, because though the stopping and starting might initially be frustrating, you can’t overestimate the importance of clean data, the right systems and personnel alignment with new tech.

Daily Tech Digest - March 17, 2025


Quote for the day:

"The leadership team is the most important asset of the company and can be its worst liability" -- Med Jones


Inching towards AGI: How reasoning and deep research are expanding AI from statistical prediction to structured problem-solving

There are various scenarios that could emerge from the near-term arrival of powerful AI. It is challenging and frightening that we do not really know how this will go. New York Times columnist Ezra Klein addressed this in a recent podcast: “We are rushing toward AGI without really understanding what that is or what that means.” For example, he claims there is little critical thinking or contingency planning going on around the implications and, for example, what this would truly mean for employment. Of course, there is another perspective on this uncertain future and lack of planning, as exemplified by Gary Marcus, who believes deep learning generally (and LLMs specifically) will not lead to AGI. Marcus issued what amounts to a take down of Klein’s position, citing notable shortcomings in current AI technology and suggesting it is just as likely that we are a long way from AGI. ... While each of these scenarios appears plausible, it is discomforting that we really do not know which are the most likely, especially since the timeline could be short. We can see early signs of each: AI-driven automation increasing productivity, misinformation that spreads at scale, eroding trust and concerns over disingenuous models that resist their guardrails. Each scenario would cause its own adaptations for individuals, businesses, governments and society.


AI in Network Observability: The Dawn of Network Intelligence

ML algorithms, trained on vast datasets of enriched, context-savvy network telemetry, can now detect anomalies in real-time, predict potential outages, foresee cost overruns, and even identify subtle performance degradations that would otherwise go unnoticed. Imagine an AI that can predict a spike in malicious traffic based on historical patterns and automatically trigger mitigations to block the attack and prevent disruption. That’s a straightforward example of the power of AI-driven observability, and it’s already possible today. But AI’s role isn’t limited to number crunching. GenAI is revolutionizing how we interact with network data. Natural language interfaces allow engineers to ask questions like: “What’s causing latency on the East Coast?” and receive concise, insightful answers. ... These aren’t your typical AI algorithms. Agentic AI systems possess a degree of autonomy, allowing them to make decisions and take actions within a defined framework. Think of them as digital network engineers, initially assisting with basic tasks but constantly learning and evolving, making them capable of handling routine assignments, troubleshooting fundamental issues, or optimizing network configurations.


Edge Computing and the Burgeoning IoT Security Threat

A majority of IoT devices come with wide-open default security settings. The IoT industry has been lax in setting and agreeing to device security standards. Additionally, many IoT vendors are small shops that are more interested in rushing their devices to market than in security standards. Another reason for the minimal security settings on IoT devices is that IoT device makers expect corporate IT teams to implement their own device settings. This occurs when IT professionals -- normally part of the networking staff -- manually configure each IoT device with security settings that conform with their enterprise security guidelines. ... Most IoT devices are not enterprise-grade. They might come with weak or outdated internal components that are vulnerable to security breaches or contain sub-components with malicious code. Because IoT devices are built to operate over various communication protocols, there is also an ever-present risk that they aren't upgraded for the latest protocol security. Given the large number of IoT devices from so many different sources, it's difficult to execute a security upgrade across all platforms. ... Part of the senior management education process should be gaining support from management for a centralized RFP process for any new IT, including edge computing and IoT. 


Data Quality Metrics Best Practices

While accuracy, consistency, and timeliness are key data quality metrics, the acceptable thresholds for these metrics to achieve passable data quality can vary from one organization to another, depending on their specific needs and use cases. There are a few other quality metrics, including integrity, relevance, validity, and usability. Depending on the data landscape and use cases, data teams can select the most appropriate quality dimensions to measure. ... Data quality metrics and data quality dimensions are closely related, but aren’t the same. The purpose, usage, and scope of both concepts vary too. Data quality dimensions are attributes or characteristics that define data quality. On the other hand, data quality metrics are values, percentages, or quantitative measurements of how well the data meets the above characteristics. A good analogy to explain the differences between data quality metrics and dimensions would be the following: Consider data quality dimensions as talking about a product’s attributes – it’s durable, long-lasting, or has a simple design. Then, data quality metrics would be how much it weighs, how long it lasts, and the like. ... Every solution starts with a problem. Identify the pressing concerns – missing records, data inconsistencies, format errors, or old records. What is it that you are trying to solve? 


How to Modernize Legacy Systems with Microservices Architectures

Scalability and agility are two significant benefits of a microservices architecture. With monolithic applications, it's difficult to isolate and scale distinct application functions under variable loads. Even if a monolithic application is scaled to meet increased demand, it could take months of time and capital to reach the end goal. By then, the demand might have changed —or disappear altogether — and the application will waste resources, bogging down the larger operating system. ... microservices architectures make applications more resilient. Because monolithic applications function on a single codebase, a single error during an update or maintenance can create large-scale problems. Microservices-based applications, however, work around this issue. Because each function runs on its own codebase, it's easier to isolate and fix problems without disrupting the rest of the application's services. ... Microservices might seem like a one-size-fits-all, no-downsides approach to modernizing legacy systems, but the first step to any major system migration is to understand the pros and cons. No major project comes without challenges, and migrating to microservices is no different. For instance, personnel might be resistant to changes associated with microservices. 


Elevating Employee Experience: Transforming Recognition with AI

AI’s ability to analyse patterns in behaviour, performance, and preferences enables organisations to offer personalised recognition that resonates with employees. AI-driven platforms provide real-time insights to leaders, ensuring that appreciation is timely, equitable, and free from unconscious biases. ... Burnout remains a critical challenge in today’s workplace, especially as workloads intensify and hybrid models blur work-life boundaries. With 84% of recognised employees being less likely to experience burnout, AI-driven recognition programs offer a proactive approach to employee well-being. Candy pointed out that AI can monitor engagement levels, detect early signs of burnout, and prompt managers to step in with meaningful appreciation. By tracking sentiment analysis, workload patterns, and feedback trends, AI helps HR teams intervene before burnout escalates. “Recognition isn’t just about celebrating big milestones; it’s about appreciating daily efforts that often go unnoticed. AI helps ensure no contribution is left behind, reinforcing a culture of continuous encouragement and support,” remarked Candy Fernandez. Arti Dua expanded on this, explaining that AI can help create customised recognition strategies that align with employees’ stress levels and work patterns, ensuring appreciation is both timely and impactful.


11 surefire ways to fail with AI

“The fastest way to doom an AI initiative? Treat it as a tech project instead of a business transformation,” Pallath says. “AI doesn’t function in isolation — it thrives on human insight, trust, and collaboration.” The assumption that just providing tools will automatically draw users is a costly myth, Pallath says. “It has led to countless failed implementations where AI solutions sit unused, misaligned with actual workflows, or met with skepticism,” he says. ... Without a workforce that embraces AI, “achieving real business impact is challenging,” says Sreekanth Menon, global leader of AI/ML at professional services and solutions firm Genpact. “This necessitates leadership prioritizing a digital-first culture and actively supporting employees through the transition.” To ease employee concerns about AI, leaders should offer comprehensive AI training across departments, Menon says. ... AI isn’t a one-time deployment. “It’s a living system that demands constant monitoring, adaptation, and optimization,” Searce’s Pallath says. “Yet, many organizations treat AI as a plug-and-play tool, only to watch it become obsolete. Without dedicated teams to maintain and refine models, AI quickly loses relevance, accuracy, and business impact.” Market shifts, evolving customer behaviors, and regulatory changes can turn a once-powerful AI tool into a liability, Pallath says.


Now Is the Time to Transform DevOps Security

Traditionally, security was often treated as an afterthought in the software development process, typically placed at the end of the development cycle. This approach worked when development timelines were longer, allowing enough time to tackle security issues. As development speeds have increased, however, this final security phase has become less feasible. Vulnerabilities that arise late in the process now require urgent attention, often resulting in costly and time-intensive fixes. Overlooking security in DevOps can lead to data breaches, reputational damage, and financial loss. Delays increase the likelihood of vulnerabilities being exploited. As a result, companies are rethinking how security should be embedded into their development processes. ... Significant challenges are associated with implementing robust security practices within DevOps workflows. Development teams often resist security automation because they worry it will slow delivery timelines. Meanwhile, security teams get frustrated when developers bypass essential checks in the name of speed. Overcoming these challenges requires more than just new tools and processes. It's critical for organizations to foster genuine collaboration between development and security teams by creating shared goals and metrics. 


AI development pipeline attacks expand CISOs’ software supply chain risk

Malicious software supply chain campaigns are targeting development infrastructure and code used by developers of AI and large language model (LLM) machine learning applications, the study also found. ... Modern software supply chains rely heavily on open-source, third-party, and AI-generated code, introducing risks beyond the control of software development teams. Better controls over the software the industry builds and deploys are required, according to ReversingLabs. “Traditional AppSec tools miss threats like malware injection, dependency tampering, and cryptographic flaws,” said ReversingLabs’ chief trust officer SaÅ¡a Zdjelar. “True security requires deep software analysis, automated risk assessment, and continuous verification across the entire development lifecycle.” ... “Staying on top of vulnerable and malicious third-party code requires a comprehensive toolchain, including software composition analysis (SCA) to identify known vulnerabilities in third-party software components, container scanning to identify vulnerabilities in third-party packages within containers, and malicious package threat intelligence that flags compromised components,” Meyer said.


Data Governance as an Enabler — How BNY Builds Relationships and Upholds Trust in the AI Era

Governance is like bureaucracy. A lot of us grew up seeing it as something we don’t naturally gravitate toward. It’s not something we want more of. But we take a different view, governance is enabling. I’m responsible for data governance at Bank of New York. We operate in a hundred jurisdictions, with regulators and customers around the world. Our most vital equation is the trust we build with the world around us, and governance is what ensures we uphold that trust. Relationships are our top priority. What does that mean in practice? It means understanding what data can be used for, whose data it is, where it should reside, and when it needs to be obfuscated. It means ensuring data security. What happens to data at rest? What about data in motion? How are entitlements managed? It’s about defining a single source of truth, maintaining data quality, and managing data incidents. All of that is governance. ... Our approach follows a hub-and-spoke model. We have a strong central team managing enterprise assets, but we've also appointed divisional data officers in each line of business to oversee local data sets that drive their specific operations. These divisional data officers report to the enterprise data office. However, they also have the autonomy to support their business units in a decentralized manner.

Daily Tech Digest - November 04, 2024

How AI Is Driving Data Center Transformation - Part 3

According to AFCOM's 2024 State of Data Center Report, AI is already having a major influence on data center design and infrastructure. Global hyperscalers and data center service providers are increasing their capacity to support AI workloads. This has a direct impact on power and cooling requirements. In terms of power, the average rack density is expected to rise from 8.5 kW per rack in 2023 to 12 kW per rack by the end of 2024, with 55% of respondents expecting higher rack density in the next 12 to 36 months. As GPUs are fitted into these racks, servers will generate more heat, increasing both power and cooling requirements. The optimal temperature for operating a data center hall is between 21 and 24°C (69.8 - 75.2°F), which means that any increase in rack density must be accompanied by improvements in cooling capabilities. ... The efficiency of a data center is measured by a metric called power usage efficiency, PUE, which is the ratio of the total amount of power used by a data center to the power used by its computing equipment. To be more efficient, data center providers aim to reduce their PUE rating and bring it closer to 1. A way to achieve that is to reduce the power consumed by the cooling units through advanced cooling technologies.


The Intellectual Property Risks of GenAI

Boards and C-suites that have not yet had discussions about the potential risks of GenAI need to start now. “Employees can use and abuse generative AI even when it is not available to them as an official company tool. It can be really tempting for a junior employee to rely on ChatGPT to help them draft formal-sounding emails, generate creative art for a PowerPoint presentation and the like. Similarly, some employees might find it too tempting to use their phone to query a chatbot regarding questions that would otherwise require intense research,” says Banner Witcoff’s Sigmon. “Since such uses don’t necessarily make themselves obvious, you can’t really figure out if, for example, an employee used generative AI to write an email, much less if they provided confidential information when doing so. This means that companies can be exposed to AI-related risk even when, on an official level, they may not have adopted any AI.” ... “As is the case with the use of technology within any large organization, successful implementation involves a careful and specific evaluation of the tech, the context of use, and its wider implications including intellectual property frameworks, regulatory frameworks, trust, ethics and compliance,” says Raeburn in an email interview. 


The 10x Developer vs. AI: Will Tech’s Elite Coder Be Replaced?

We’re seeing AI tools that can smash out complex coding tasks in minutes and take even your best senior devs’ hours. At Cosine, we’ve seen this firsthand with our AI, Genie. Many of the tasks we tested were in the four to six-hour range, and Genie could complete them in four to six minutes. It’s a genuine superhuman thing to be able to solve problems that quickly. But here’s where it gets interesting. This isn’t just about raw output. The real mind-bender is that AI is starting to think like an engineer. It’s not just spitting out code — it’s solving problems. ... Suppose we’re looking slightly more pragmatically at what AI could signal for career progression. In that case, there is a counterargument that junior developers won’t be exposed to the same level of problem-solving or acquire the same skill sets, given the availability of AI. This creates a complete headache for HR. How do you structure career progression when the traditional markers of seniority — years of experience, deep technical knowledge — might not mean as much? I think we’ll see a shift in focus. Companies will probably lean more on whether you fulfilled your sprint objectives and shipped what you wanted on time instead of going deeper. As for the companies themselves? Those who don’t get on board with AI coding tools will get left in the dust.


The 5 gears of employee well-being

Ritika is of view that managing employees’ and organisational expectations requires clear communication from the leadership. “It offers employees a transparent view of the organisation's direction and highlights how their contributions drive Amway's success and growth. Our leadership prioritises transparency, ensuring that employees have a clear understanding of the organisation’s direction and how their individual and collaborative efforts contribute to collective goals. This approach fosters a strong sense of purpose and engagement while aligning with the vision and desired culture of the company.” She further calls for having a robust feedback mechanism that allows employees an opportunity to share their honest feedback on areas that matter the most and the ones that impact them. “We believe in the feedback flywheel, our bi-annual culture and employee engagement survey allow employees an opportunity to share feedback. Each feedback is followed by a cycle of sharing results and action planning.” She further adds that frequent check-in conversations between the upline and team members ensure there is clarity of expectations, our performance management system ensures there are 3 formal check-in conversations that are focused on coaching and development and not ‘judgement’. 


Agentic AI swarms are headed your way

OpenAI launched an experimental framework last month called Swarm. It’s a “lightweight” system for the development of agentic AI swarms, which are networks of autonomous AI agents able to work together to handle complex tasks without human intervention, according to OpenAI. Swarm is not a product. It’s an experimental tool for coordinating or orchestrating networks of AI agents. The framework is open-source under the MIT license, and available on GitHub. ... One way to look at agentic AI swarming technology is that it’s the next powerful phase in the evolution of generative AI (genAI). In fact, Swarm is built on OpenAI’s Chat Completions API, which uses LLMs like GPT-4. The API is designed to facilitate interactive “conversations” with AI models. It allows developers to create chatbots, interactive agents, and other applications that can engage in natural language conversations. Today, developers are creating what you might call one-off AI tools that do one specific task. Agentic AI would enable developers to create a large number of such tools that specialize in different specific tasks, and then enable each tool to dragoon any others into service if the agent decides the task would be better handled by the other kind of tool.


How To Develop Emerging Leaders In Your Organization

Mentorship and coaching are critical for unlocking the leadership potential of emerging talent. By pairing less experienced employees with seasoned leaders, companies provide invaluable hands-on learning experiences beyond formal training programs. These relationships allow future leaders to observe high-level decision-making in action, receive personalized feedback, and cultivate their leadership instincts in real-world scenarios. ... While technical skills are essential, leadership success depends heavily on soft skills like emotional intelligence, communication, and adaptability. These skills help leaders navigate team dynamics, inspire trust, and handle organizational challenges with confidence. Workshops, problem-solving exercises, and leadership programs are effective for developing these abilities. ... Leadership development can’t happen in a vacuum. One of the most effective ways to accelerate growth is through “stretch assignments,” opportunities that push employees beyond their comfort zones by challenging them with responsibilities that test their leadership abilities. These assignments expose future leaders to high-stakes decision-making, cross-functional collaboration, and strategic thinking, all of which prepare them for the demands of more senior roles.


CIOs look to sharpen AI governance despite uncertainties

There is no dearth of AI governance frameworks available from the US government and European Union, as well as top market researchers, but no doubt, as gen AI innovation outpaces formal standards, CIOs will need to enact and hone internal AI governance policies in 2025 — and enlist the entire C-suite in the process to ensure they are not on the hook alone, observers say. ... “Governance is really about listening and learning from each other as we all care about the outcome, but equally as important, howwe get to the outcome itself,” Williams says. “Once you cross that bridge, you can quickly pivot into AI tools and the actual projects themselves, which is much easier to maneuver.” TruStone Financial Credit Union is also grappling with establishing a comprehensive AI governance program as AI innovation booms. “New generative AI platforms and capabilities are emerging every week. When we discover them, we block access until we can thoroughly evaluate the effectiveness of our controls,” says Gary Jeter, EVP and CTO at TruStone, noting, as an example, that he decided to block access to Google’s NotebookLM initially to assess its safety. Like many enterprises, TruStone has deployed a companywide generative AI platform for policies and procedures branded as TruAssist.


Design strategies in the white space ecosystem

AI compute cabinets can weigh up to 4,800 pounds, raising concerns about floor load capacity. Raised floors offer flexibility for cabling, cooling, and power management but may struggle with the weight demands of high-density setups. Slab floors are sturdier but come with their own design and cost challenges, particularly for liquid cooling, which can pose risks if leaks occur. This isn’t just a financial concern – it’s also about safety. “As we integrate various trades and systems into the same space with multiple teams working alongside each other, safety becomes paramount. Proper structural load assessments and seismic bracing, especially in earthquake-prone areas, are essential to ensure the raised floor can handle the weight,” Willis emphasizes. ... As the landscape of high-performance computing continues to grow and evolve, so too do the designs of data center cabinets. These changes are driven by the need for deeper and wider cabinets that can support a greater number of power distribution units (PDUs) and cabling. The emphasis is not just on accommodating equipment, but also on optimizing space and power capacity to avoid the network distance limitations that can arise when cabinets become too wide.


Costly and struggling: the challenges of legacy SIEM solutions

The main problem organizations face with legacy SIEM systems is the massive amount of unstructured data they produce, making it hard to spot signs of advanced threats such as ransomware and advanced persistent threat groups. “These systems were built primarily to detect known threats using signature-based approaches, which are insufficient against today’s sophisticated, constantly evolving attack techniques,” Young says. “Modern threats often employ subtle tactics that require advanced analytics, behavior-based detection, and proactive correlation across multiple data sources — capabilities that many legacy SIEMs lack. In addition, legacy SIEM systems typically don’t support automated threat intelligence feeds, which are crucial for staying ahead of emerging threats, according to Young. “They also lack the ability to integrate with security orchestration, automation, and response tools, which help automate responses and streamline incident management.” Without these modern features, legacy SIEMs often miss important warning signs of attacks and have trouble connecting different threat signals, making organizations more exposed to complex, multi-stage attacks. Mellen says SIEMS are only as good as the work that companies put into them, which is the predominant feedback she’s received over the years from many practitioners.


Why Effective Fraud Prevention Requires Contact Data Quality Technology

From our experience the quality of contact data is essential to the effectiveness of ID processes, influencing everything from end-to-end fraud prevention to delivering simple ID checks; meaning more advanced and costly techniques, like biometrics and liveness authentication, may not be necessary. The verification process becomes more reliable when a customer’s contact information, such as name, address, email and phone number, are accurate. With this data ID verification technology can then confidently cross-reference the provided information against official databases or other authoritative sources, without discrepancies that could lead to false positives or negatives. A growing issue is fraudsters exploiting inaccuracies in contact data to create false identities and manipulate existing ones. By maintaining clean and accurate contact data ID verification systems can more effectively detect suspicious activity and prevent fraud. For example, inconsistencies in a user’s phone or email, or an address linked to multiple identities, could serve as a red flag for additional scrutiny.



Quote for the day:

“Disagree and commit is a really important principle that saves a lot of arguing.” -- Jeff Bezos

Daily Tech Digest - July 04, 2024

Understanding collective defense as a route to better cybersecurity

Organizations invoking collective defense to protect their IT and data assets will usually focus on sharing threat intelligence and coordinating threat response actions to counter malicious threat actors. Success depends on defining and implementing a collaborative cybersecurity strategy where organizations, both internally and externally, work together across industries to defend against targeted cyber threats. ... Putting this into practice requires organizations to commit to coordinating their cybersecurity strategies to identify, mitigate and recover from threats and breaches. This should begin with a process that defines the stakeholders who will participate in the collective defense initiative. These can include anything from private companies and government agencies to non-profits and Information Sharing and Analysis Centers (ISACs), among others. The approach will only work if it is based on mutual trust, so there is an important role for the use of mechanisms such as non-disclosure agreements, clearly defined roles and responsibilities and a commitment to operational transparency. 


Meaningful Ways to Reward Your IT Team and Its Achievements

With technology rapidly advancing, it's more important than ever to invest in personalized IT team skill development and employee well-being programs, which are a win-win for employees and the companies they work for, says Carrie Rasmussen, CIO at human resources software provider Dayforce, in an email interview. ... Synchronize rewards to project workflows, Felker recommends. If it's a particularly difficult time for the team -- tight deadlines, major changes, and other pressing issues -- he suggests scheduling rewards prior to the work's completion to boost motivation. "Having the team get a boost mid-stream on a project is likely to create an additional reservoir of mental energy they can draw from as the project continues," Felker says. ... It's also important to celebrate success whenever possible and to acknowledge that the outcome was the direct result of great teamwork. "Five minutes of recognition from the CEO in a company update or other forum motivates not only the IT team but the rest of the organization to strive for recognition," Nguyen says. He also advises promoting significant team achievements on LinkedIn and other major social platforms. "This will aid recruiting and retention efforts."


Deepfake research is growing and so is investment in companies that fight it

Manipulating human likeness, such as creating deepfake images, video and audio of people, has become the most common tactic for misusing generative AI, a new study from Google reveals. The most common reason to misuse the technology is to influence public opinion – including swaying political opinion – but it is also finding its way in scams, frauds or other means of generating profit. ... Impersonations of celebrities or public figures, for instance, are often used in investment scams while AI-generated media can also be generated to bypass identity verification and conduct blackmail, sextortion and phishing scams. As the primary data is media reports, the researchers warn that the perception of AI-generated misuse may be skewed to the ones that attract headlines. But despite concerns that sophisticated or state-sponsored actors will use generative AI, many of the cases of misuse were found to rely on popular tools that require minimal technical skills. ... With the threat of deepfakes becoming widespread, some companies are coming up with novel solutions that protect images online.


Building Finance Apps: Best Practices and Unique Challenges

By making compliance a central focus from day one of the development process, you maximize your ability to meet compliance needs, while also avoiding the inefficient process of retrofitting compliance features into the app later. For example, implementing transaction reporting after the rest of the app has been built is likely to be a much heavier lift than designing the app from the start to support that feature. ... The tech stack (meaning the set of frameworks and tools you use to build and run your app) can have major implications for how easy it is to build the app, how secure and reliable it is, and how well it integrates with other systems or platforms. For that reason, you'll want to consider your stack carefully, and avoid the temptation to go with whichever frameworks or tools you know best or like the most. ... Given the plethora of finance apps available today, it can be tempting to want to build fancy interfaces or extravagant features in a bid to set your app apart. In general, however, it's better to adopt a minimalist approach. Build the features your users actually want — no more, no less. Otherwise, you waste time and development resources, while also potentially exposing your app to more security risks.


OVHcloud blames record-breaking DDoS attack on MikroTik botnet

Earlier this year, OVHcloud had to mitigate a massive packet rate attack that reached 840 Mpps, surpassing the previous record holder, an 809 Mpps DDoS attack targeting a European bank, which Akamai mitigated in June 2020. ... OVHcloud says many of the high packet rate attacks it recorded, including the record-breaking attack from April, originate from compromised MirkoTik Cloud Core Router (CCR) devices designed for high-performance networking. The firm identified, specifically, compromised models CCR1036-8G-2S+ and CCR1072-1G-8S+, which are used as small—to medium-sized network cores. Many of these devices exposed their interface online, running outdated firmware and making them susceptible to attacks leveraging exploits for known vulnerabilities. The cloud firm hypothesizes that attackers might use MikroTik's RouterOS's "Bandwidth Test" feature, designed for network throughput stress testing, to generate high packet rates. OVHcloud found nearly 100,000 Mikrotik devices that are reachable/exploitable over the internet, making up for many potential targets for DDoS actors.


Set Goals and Measure Progress for Effective AI Deployment

Combining human expertise and AI capabilities to augment decision-making is an essential tenet in responsible AI principles. The current age of AI adoption should be considered a “coming together of humans and technology.” Humans will continue to be the custodians and stewards of data, which ties into Key Factor 2 about the need for high-quality data, as humans can help curate the relevant data sets to train an LLM. This is critical, and the “human-in-the-loop” facet should be embedded in all AI implementations to avoid completely autonomous implementations. Apart from data curation, this allows humans to take more meaningful actions when equipped with relevant insights, thus achieving better business outcomes. ... Addressing bias, privacy, and transparency in AI development and deployment is the pivotal metric in measuring its success. Like any technology, laying out guardrails and rules of engagement are core to this factor. Enterprises such as Accenture implement measures to detect and prevent bias in their AI recruitment tools, helping to ensure fair hiring practices. 


Site Reliability Engineering State of the Union for 2024

Automation remains at the core of SRE, with tools for container orchestration and infrastructure management playing a critical role. The adoption of containerization technologies such as Docker and Kubernetes has facilitated more efficient deployment and scaling of applications. In 2024, we can expect further advancements in automation tools that streamline the orchestration of complex microservices architectures, thereby reducing the operational burden on SRE teams. Infrastructure automation and orchestration are pivotal in the realm of SRE, enabling teams to manage complex systems with enhanced efficiency and reliability. The evolution of these technologies, particularly with the advent of containerization and microservices, has significantly transformed how applications are deployed, managed and scaled. ... With the increasing prevalence of cyberthreats and the tightening of regulatory requirements, security and compliance have become integral aspects of SRE. Automated tools for compliance monitoring and enforcement will become indispensable, enabling organizations to adhere to industry standards while minimizing the risk of data breaches and other security incidents.


5 Steps to Refocus Your Digital Transformation Strategy for Strategic Advancement

A strategy built around customer value provides measurable outcomes and drives deeper engagement and loyalty. The digital landscape is riddled with risks and opportunities due to rapid technological advancements, especially in data-centric AI. Businesses must stay agile, continually evaluating the risks and rewards of new technologies while maintaining a sharp focus on how these enhancements serve their customer base. ... Organizations with a customer advisory board should leverage it to gain insights directly from those who use their services or products. Engaging customers from the early stages of planning ensures that their feedback and needs directly influence the transformation strategy, leading to more accurate and beneficial implementations. ... One significant mistake IT leaders make is prioritizing technology over customer needs. While technology is a crucial enabler, it should not dictate the strategy. Instead, it should support and enhance the strategy’s core aim — serving the customer. IT leaders must ensure that digital initiatives align with broader business objectives and directly contribute to customer satisfaction and business efficiency.


OpenSSH Vulnerability “regreSSHion” Grants RCE Access Without User Interaction, Most Dangerous Bug in Two Decades

The good news about the OpenSSH vulnerability is that exploitation attempts have not yet been spotted in the wild. Successfully taking advantage of the exploit required about 10,000 tries to win a race condition using 100 concurrent connections under the researcher’s test conditions, or about six to eight hours to RCE due to obfuscation of ASLR glibc’s address. The attack will thus likely be limited to those wielding botnets when it is uncovered by threat actors. Given the large amount of simultaneous connections needed to induce the race condition, the RCE is also very open to being detected and blocked by firewalls and networking monitoring tools. Qualys’ immediate advice for mitigation also includes updating network-based access controls and segmenting networks where possible. ... “While there is currently no proof of concept demonstrating this vulnerability, and it has only been shown to be exploitable under controlled lab conditions, it is plausible that a public exploit for this vulnerability could emerge in the near future. Hence it’s strongly advised to patch this vulnerability before this becomes the case”.


New paper: AI agents that matter

So are AI agents all hype? It’s too early to tell. We think there are research challenges to be solved before we can expect agents such as the ones above to work well enough to be widely adopted. The only way to find out is through more research, so we do think research on AI agents is worthwhile. One major research challenge is reliability — LLMs are already capable enough to do many tasks that people want an assistant to handle, but not reliable enough that they can be successful products. To appreciate why, think of a flight-booking agent that needs to make dozens of calls to LLMs. If each of those went wrong independently with a probability of, say, just 2%, the overall system would be so unreliable as to be completely useless (this partly explains some of the product failures we’ve seen). ... Right now, however, research is itself contributing to hype and overoptimism because evaluation practices are not rigorous enough, much like the early days of machine learning research before the common task method took hold. That brings us to our paper.



Quote for the day:

"You can’t fall if you don’t climb. But there’s no joy in living your whole life on the ground." -- Unknown

Daily Tech Digest - April 16, 2024

How to Build a Successful AI Strategy for Your Business in 2024

With a solid understanding of AI technology and your organization’s priorities, the next step is to define clear objectives and goals for your AI strategy. Focus on identifying the problems that AI can solve most effectively within your organization. These objectives should be specific, measurable, achievable, relevant, and time-bound (SMART). ... By setting well-defined objectives, you can create a targeted AI strategy that delivers tangible results and aligns with your overall business priorities. An AI implementation strategy often requires specialized expertise and tools that may not be available in-house. To bridge this gap, identify potential partners and vendors who can provide the necessary support for your AI strategy.Start by researching AI and machine learning companies that have a proven track record of working in your industry. When evaluating potential partners, consider factors such as their technical capabilities, the quality of their tools and platforms, and their ability to scale as your AI needs grow. Look for vendors who offer comprehensive solutions that cover the entire AI lifecycle, from data preparation and model development to deployment and monitoring.


Internet can achieve quantum speed with light saved as sound

When transferring information between two quantum computers over a distance—or among many in a quantum internet—the signal will quickly be drowned out by noise. The amount of noise in a fiber-optic cable increases exponentially the longer the cable is. Eventually, data can no longer be decoded. The classical Internet and other major computer networks solve this noise problem by amplifying signals in small stations along transmission routes. But for quantum computers to apply an analogous method, they must first translate the data into ordinary binary number systems, such as those used by an ordinary computer. This won't do. Doing so would slow the network and make it vulnerable to cyberattacks, as the odds of classical data protection being effective in a quantum computer future are very bad. "Instead, we hope that the quantum drum will be able to assume this task. It has shown great promise as it is incredibly well-suited for receiving and resending signals from a quantum computer. So, the goal is to extend the connection between quantum computers through stations where quantum drums receive and retransmit signals, and in so doing, avoid noise while keeping data in a quantum state," says Kristensen.


Better application networking and security with CAKES

A major challenge in enterprises today is keeping up with the networking needs of modern architectures while also keeping existing technology investments running smoothly. Large organizations have multiple IT teams responsible for these needs, but at times, the information sharing and communication between these teams is less than ideal. Those responsible for connectivity, security, and compliance typically live across networking operations, information security, platform/cloud infrastructure, and/or API management. These teams often make decisions in silos, which causes duplication and integration friction with other parts of the organization. Oftentimes, “integration” between these teams is through ticketing systems. ... Technology alone won’t solve some of the organizational challenges discussed above. More recently, the practices that have formed around platform engineering appear to give us a path forward. Organizations that invest in platform engineering teams to automate and abstract away the complexity around networking, security, and compliance enable their application teams to go faster.


AI set to enhance cybersecurity roles, not replace them

Ready or not, though, AI is coming. That being the case, I’d caution companies, regardless of where they are on their AI journey, to understand that they will encounter challenges, whether from integrating this technology into current processes or ensuring that staff are properly trained in using this revolutionary technology, and that’s to be expected. As a cloud security community, we will all be learning together how we can best use this technology to further cybersecurity. ... First, companies need to treat AI with the same consideration as they would a person in a given position, emphasizing best practices. They will also need to determine the AI’s function — if it merely supplies supporting data in customer chats, then the risk is minimal. But if it integrates and performs operations with access to internal and customer data, it’s imperative that they prioritize strict access control and separate roles. ... We’ve been talking about a skills gap in the security industry for years now and AI will deepen that in the immediate future. We’re at the beginning stages of learning, and understandably, training hasn’t caught up yet.


Why employee recognition doesn't work: The dark side of boosting team morale

Despite the importance of appreciation, many workplaces prioritise performance-based recognition, inadvertently overlooking the profound impact of genuine appreciation. This preference for recognition over appreciation can lead to detrimental outcomes, including conditionality and scarcity. Conditionality in recognition arises from its link to past achievements and performance outcomes. Employees often feel pressured to outperform their peers and surpass their past accomplishments to receive recognition, fostering a hypercompetitive work environment that undermines collaboration and teamwork. Furthermore, the scarcity of recognition exacerbates this issue, as tangible rewards such as bonuses or promotions are limited. In this competitive landscape, employees may feel undervalued, leading to disengagement and disillusionment. To foster an inclusive and supportive workplace culture, organisations must recognise the intrinsic value of appreciation alongside performance-based recognition. Embracing appreciation cultivates a culture of gratitude, empathy, and mutual respect, strengthening interpersonal connections and boosting employee morale.


Improving decision-making in LLMs: Two contemporary approaches

Training LLMs in context-appropriate decision-making demands a delicate touch. Currently, two sophisticated approaches posited by contemporary academic machine learning research suggest alternate ways of enhancing the decision-making process of LLMs to parallel those of humans. The first, AutoGPT, uses a self-reflexive mechanism to plan and validate the output; the second, Tree of Thoughts (ToT), encourages effective decision-making by disrupting traditional, sequential reasoning. AutoGPT represents a cutting-edge approach in AI development, designed to autonomously create, assess and enhance its models to achieve specific objectives. Academics have since improved the AutoGPT system by incorporating an “additional opinions” strategy involving the integration of expert models. This presents a novel integration framework that harnesses expert models, such as analyses from different financial models, and presents it to the LLM during the decision-making process. In a nutshell, the strategy revolves around increasing the model’s information base using relevant information. 


Unpacking the Executive Order on Data Privacy: A Deeper Dive for Industry Professionals

For privacy professionals, the order underscores the ongoing challenge of protecting sensitive information against increasingly sophisticated threats. That’s important, and shouldn’t be overlooked. Yet the White House has admitted that this order isn’t a silver bullet for all the nation’s data privacy challenges. That candor is striking. It echoes a sentiment familiar to many of us in the industry: the complexities of protecting personal information in the digital age cannot be fully addressed through singular measures against external threats. Instead, this task requires a long-term, thoughtful, multi-faceted approach – one that also confronts the internal challenges to data privacy posed by Big Tech, domestic data brokers, and foreign governments that exist outside of the designated “countries of concern” category. ... The extensive collection, usage, and sale of personal data by domestic entities—including but not limited to Big Tech companies, data brokers, and third-party vendors—poses significant risks. These practices often lack transparency and accountability, fueling privacy breaches, identity theft, and eroding public trust and individual autonomy.


10 tips to keep IP safe

CSOs who have been protecting IP for years recommend doing a risk and cost-benefit analysis. Make a map of your company’s assets and determine what information, if lost, would hurt your company the most. Then consider which of those assets are most at risk of being stolen. Putting those two factors together should help you figure out where to best spend your protective efforts (and money). If information is confidential to your company, put a banner or label on it that says so. If your company data is proprietary, put a note to that effect on every log-in screen. This seems trivial, but if you wind up in court trying to prove someone took information they weren’t authorized to take, your argument won’t stand up if you can’t demonstrate that you made it clear that the information was protected. ... Awareness training can be effective for plugging and preventing IP leaks, but only if it’s targeted to the information that a specific group of employees needs to guard. When you talk in specific terms about something that engineers or scientists have invested a lot of time in, they’re very attentive. As is often the case, humans are often the weakest link in the defensive chain. 


Types of Data Integrity

Here are a few data integrity issues and risks many organizations face: Compromised hardware: Power outages, fire sprinklers, or a clumsy person knocking a computer to the floor are examples of situations that can cause the loss of vital data or its corruption. Security considers compromised hardware to be hardware that has been hacked. Cyber threats: Cyber security attacks – phishing attacks, malware – present a serious threat to data integrity. Malicious software can corrupt or alter critical data within a database. Additionally, hackers gaining unauthorized access can manipulate or delete data. If changes are made as a result of unauthorized access, it may be a failure in data security. ... Human error: A significant source of data integrity problems is human error. Mistakes that are made during manual entries can produce inaccurate or inconsistent data that then gets stored in the database. Data transfer errors: During the transfer of data, data integrity can be compromised. Transfer errors can damage data integrity, especially when moving massive amounts of data during extract, transform, and load processes, or when moving the organization’s data to a different database system.


Sisense Breach Highlights Rise in Major Supply Chain Attacks

Many of the details of the attack are not yet clear, but the breach may have exposed hundreds of Sisense's prominent customers to a supply chain attack that gave hackers a backdoor into the company's customer networks, a CISA official told Information Security Media Group. Experts said the attack suggests trusted companies are still failing to implement proactive defensive measures to spot supply chain attacks - such as robust access controls, real-time threat intelligence and regular security assessments - at a time when organizations are increasingly reliant on interconnected ecosystems. "These types of software supply chain attacks are only possible through compromised developer credentials and account information from an employee or contractor," said Jim Routh, chief trust officer for the software security company Saviynt. The breach highlights the need for enterprises to improve their identity access management capabilities for cloud-based services and other third parties, he said. Security intelligence platform Censys published insights into the Sisense breach Friday.



Quote for the day:

"Success is the progressive realization of predetermined, worthwhile, personal goals." -- Paul J. Meyer

Daily Tech Digest - September 22, 2023

HR Leaders’ strategies for elevating employee engagement in global organisations

In the age of AI, HR technologies have emerged as powerful tools for enhancing employee engagement by streamlining HR processes, improving communication, and personalising the employee experience. Sreedhara added “By embracing HR Tech, we can enhance the employee experience by reducing administrative burdens, improving access to information, and enabling employees to focus on more meaningful aspects of their work. Moreover, these technologies can contribute to greater employee engagement. Enhancing employee experience via HR tech and tools can improve efficiency, and empower employees to take more control of their work-related tasks. We have also enabled some self-service technologies like: Employee portal that serves all HR-related tasks, and access to policies and processes across the employee life cycle - Onboarding, performance management, benefits enrolment, and expense management;  Employee feedback and surveys; Databank for predictive analysis (early warning systems) and manage employee engagement.”


Bolstering enterprise LLMs with machine learning operations foundations

Risk mitigation is paramount throughout the entire lifecycle of the model. Observability, logging, and tracing are core components of MLOps processes, which help monitor models for accuracy, performance, data quality, and drift after their release. This is critical for LLMs too, but there are additional infrastructure layers to consider. LLMs can “hallucinate,” where they occasionally output false knowledge. Organizations need proper guardrails—controls that enforce a specific format or policy—to ensure LLMs in production return acceptable responses. Traditional ML models rely on quantitative, statistical approaches to apply root cause analyses to model inaccuracy and drift in production. With LLMs, this is more subjective: it may involve running a qualitative scoring of the LLM’s outputs, then running it against an API with pre-set guardrails to ensure an acceptable answer. Governance of enterprise LLMs will be both an art and science, and many organizations are still understanding how to codify them into actionable risk thresholds. 


Reimagining Application Development with AI: A New Paradigm

AI-assisted pair programming is a collaborative coding approach where an AI system — like GitHub Copilot or TestPilot — assists developers during coding. It’s an increasingly common approach that significantly impacts developer productivity. In fact, GitHub Copilot is now behind an average of 46 percent of developers’ code and users are seeing 55 percent faster task completion on average. For new software developers, or those interested in learning new skills, AI-assisted pair programming are training wheels for coding. With the benefits of code snippet suggestions, developers can avoid struggling with beginner pitfalls like language syntax. Tools like ChatGPT can act as a personal, on-demand tutor — answering questions, generating code samples, and explaining complex code syntax and logic. These tools dramatically speed the learning process and help developers gain confidence in their coding abilities. Building applications with AI tools hastens development and provides more robust code. 


Don't Let AI Frenzy Lead to Overlooking Security Risks

"Everybody is talking about prompt injection or backporting models because it is so cool and hot. But most people are still struggling with the basics when it comes to security, and these basics continue to be wrong," said John Stone - whose title at Google Cloud is "chaos coordinator" - while speaking at Information Security Media Group's London Cybersecurity Summit. Successful AI implementation requires a secure foundation, meaning that firms should focus on remediating vulnerabilities in the supply chain, source code, and larger IT infrastructure, Stone said. "There are always new things to think about. But the older security risks are still going to happen. You still have infrastructure. You still have your software supply chain and source code to think about." Andy Chakraborty, head of technology platforms at Santander U.K., told the audience that highly regulated sectors such as banking and finance must especially exercise caution when deploying AI solutions that are trained on public data sets.


The second coming of Microsoft's do-it-all laptop is more functional than ever

Microsoft's Surface Laptop Studio 2 is really unlike any other laptop on the market right now. The screen is held up by a tiltable hinge that lets it switch from what I'll call "regular laptop mode" to stage mode (the display is angled like the image above) to studio mode (the display is laid flat, screen-side up, like a tablet). The closest thing I can think of is, well, the previous Laptop Studio model, which fields the same shape-shifting form factor. But after today, if you're the customer for Microsoft's screen-tilting Surface device, then your eyes will be all over the latest model, not the old. That's a good deal, because, unlike the predecessor, the new Surface Laptop Studio 2 features an improved 13th Gen Intel Core H-class processor, NVIDIA's latest RTX 4050/4060 GPUs, and an Intel NPU on Windows for video calling optimizations (which never hurts to have). Every Microsoft expert on the demo floor made it clear to me that gaming and content creation workflows are still the focus of the Studio laptop, so the changes under the hood make sense.


Why more security doesn’t mean more effective compliance

Worse, the more tools there are to manage, the harder it might be to prove compliance with an evolving patchwork of global cybersecurity rules and regulations. That’s especially true of legislation like DORA, which focuses less on prescriptive technology controls and more on providing evidence of why policies were put in place, how they’re evolving, and how organizations can prove they’re delivering the intended outcomes. In fact, it explicitly states that security and IT tools must be continuously monitored and controlled to minimize risk. This is a challenge when organizations rely on manual evidence gathering. Panaseer research reveals that while 82% are confident they’re able to meet compliance deadlines, 49% mostly or solely rely on manual, point-in-time audits. This simply isn’t sustainable for IT teams, given the number of security controls they must manage, the volume of data they generate, and continuous, risk-based compliance requirements. They need a more automated way to continuously measure and evidence KPIs and metrics across all security controls.


EU Chips Act comes into force to ensure supply chain resilience

“With the entry into force today of the European Chips Act, Europe takes a decisive step forward in determining its own destiny. Investment is already happening, coupled with considerable public funding and a robust regulatory framework,” said Thierry Breton, commissioner for Internal Market, in comments posted alongside the announcement. “We are becoming an industrial powerhouse in the markets of the future — capable of supplying ourselves and the world with both mature and advanced semiconductors. Semiconductors that are essential building blocks of the technologies that will shape our future, our industry, and our defense base,” he said. The European Union’s Chips Act is not the only government-backed plan aimed at shoring up domestic chip manufacturing in the wake of the supply chain crisis that has plagued the semiconductor industry in recent years. In the past year, the US, UK, Chinese, Taiwanese, South Korean, and Japanese governments have all announced similar plans.


Microsoft Copilot Brings AI to Windows 11, Works Across Multiple Apps and Your Phone

With Copilot, it's possible to ask the AI to write a summary of a book in the middle of a Word document, or to select an image and have the AI remove the background. In one example, Microsoft showed a long email and demonstrated that when you highlight the text, Copilot appears so you can ask it questions related to the email. And that information can be cross-referenced to information found online, such as asking Copilot for lunch spots nearby based on the email's content. Copilot will be available on the Windows 11 desktop taskbar, making it instantly available at one click. Microsoft says that whether you're using Word, PowerPoint or Edge, you can call on Copilot to assist you with various tasks. It can also be called on via voice. Copilot can connect to your phone, so, for example, you can ask it when your next flight is and it'll look through your text messages and find the necessary information. Edge, Microsoft's web browser, will also have Copilot integrations. 


What Are the Biggest Lessons from the MGM Ransomware Attack?

Ransomware groups increasingly focus on branding and reputation, according to Ferhat Dikbiyik, head of research at third-party risk management software company Black Kite. “When ransomware first made its appearance, the attacks were relatively unsophisticated. Over the years, we have observed a marked elevation in their capabilities and tactics,” he tells InformationWeek in a phone interview. ... The group also called out: “The rumors about teenagers from the US and UK breaking into this organization are still just that -- rumors. We are waiting for these ostensibly respected cybersecurity firms who continue to make this claim to start providing solid evidence to support it.” Dikbiyik also notes that ransomware groups’ more nuanced selection of targets is an indication of increased professionalism. “These groups are doing their homework. They have resources. They acquire intelligence tools…they try to learn their targets,” he says. While ransomware is lucrative, money isn’t the only goal. Selecting high-profile targets, such as MGM, helps these groups to build a reputation, according to Dikbiyik.


A Dimensional Modeling Primer with Mark Peco

“Dimensional models are made up of two elements: facts and dimensions,” he explained. “A fact quantifies a property (e.g., a process cost or efficiency score) and is a measurement that can be captured at a point in time. It’s essentially just a number. A dimension provides the context for that number (e.g., when it was measured, who was the customer, what was the product).” It’s through combining facts and dimensions that we create information that can be used to answer business questions, especially those that relate to process improvement or business performance, Peco said. Peco went on to say that one of the biggest challenges he sees with companies using dimensional models is with integrating the potentially huge number of models into one coherent picture of the business. “A company has many, many processes,” he said, “and each requires its own dimensional model, so there has to be some way of joining these models together to give a complete picture of the organization.” 



Quote for the day:

"Things work out best for those who make the best of how things work out." -- John Wooden