
Daily Tech Digest - September 29, 2025


Quote for the day:

"Remember that stress doesn't come from what is going on in your life. It comes from your thoughts on what is going on in your life." -- Andrew Bernstein



Agentic AI in IT security: Where expectations meet reality

The first decision regarding AI agents is whether to layer them onto existing platforms or to implement standalone frameworks. The add-on model treats agents as extensions to security information and event management (SIEM), security orchestration, automation and response (SOAR), or other security tools, providing quick wins with minimal disruption. Standalone frameworks, by contrast, act as independent orchestration layers, offering more flexibility but also requiring heavier governance, integration, and change management. ... Agentic AI adoption rarely happens overnight. As Check Point’s Weigman puts it, “Most security teams aren’t swapping out their whole SOC for some shiny new AI system, and one can understand that: It’s expensive, and it demands time and human effort, which at the end of the day could appear to be too disruptive and costly.” Instead, leaders look for ways to incrementally layer new capabilities without jeopardizing ongoing operations, which makes pilots a common first step. ... “An agent designed to carry out a sequence of actions in response to a threat could inadvertently create new risks if misused or deployed inappropriately,” says Goje. “For instance, there’s potential for unregulated scripts or newly discovered vulnerabilities.” ... “Pricing remains a friction point,” says Fifthelement.ai’s Garini. “Vendors are playing with usage-based models, but organizations are finding value when they tie spend to analyst hours saved rather than raw compute or API calls.”


Anthropic, surveillance and the next frontier of AI privacy

Democratic legal systems are built on due process: Law enforcement must have grounds to investigate. Surveillance is meant to be targeted, not generalized. Allowing AI to conduct mass, speculative profiling would invert that principle, treating everyone as a potential suspect and granting AI the power to decide who deserves scrutiny. By saying “no” to this use case, Anthropic has drawn a red line. It is asserting that there are domains where the risk of harm to civil liberties outweighs the potential utility. ... How much should technology companies be able to control how their products are used, particularly once they are sold into government? Better yet, do they have a responsibility to ensure their products are used as intended? There is no easy answer. Enforcement of “terms of service” in highly sensitive contexts is notoriously difficult. A government agency may purchase access to an AI model and then apply it in ways that the provider cannot see or audit. ... The real challenge ahead is to establish publicly accountable frameworks that balance security needs with fundamental rights. Surveillance powered by AI will be more powerful, more scalable and more invisible than anything that came before. It has enormous potential when it comes to national security use cases. Yet without clear limits, it threatens to normalize perpetual, automated suspicion.


How attackers poison AI tools and defenses

AI systems that act with a high degree of autonomy carry another risk: impersonating users or trusting impostors. One tactic is known as a “Confused Deputy” attack. Here, an AI agent with high privileges performs a task on behalf of a low-privileged attacker. Another involves spoofed API access, where attackers trick integrations with services like Microsoft 365 or Gmail into leaking information or sending fraudulent emails. ... One crucial step is to make filters aware of how LLMs generate content, so they can flag anomalies in tone, behavior or intent that might slip past older systems. Another is to validate what AI systems remember over time. Without that check, poisoned data can linger in memory and influence future decisions. Isolation also matters. AI assistants should run in contained environments where unverified actions are blocked before they can cause damage. Identity management needs to follow the principle of least privilege, giving AI integrations only the access they require. Finally, treat every instruction with skepticism. Even routine requests must be verified before execution if zero-trust principles are to hold. ... The next wave of threats will involve agentic AI-powered systems that reason, plan and act on their own. While these tools can deliver tremendous productivity gains to users, their autonomy makes them attractive targets. If attackers succeed in steering an agent, the system could make decisions, launch actions or move data undetected.
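The defensive pattern described here, checking the requester's own privileges before an agent acts and running unverified actions in a contained environment, can be made concrete with a small sketch. The roles, action names, and the authorize_agent_action / execute_with_least_privilege helpers below are purely illustrative assumptions, not taken from any specific product:

```python
# Hypothetical sketch of a zero-trust gate for agent tool calls.
ALLOWED_ACTIONS = {
    # role -> actions an AI integration may perform on that role's behalf
    "analyst": {"read_alerts", "search_logs"},
    "responder": {"read_alerts", "search_logs", "isolate_host"},
}

def authorize_agent_action(requesting_user_role: str, action: str) -> bool:
    """Check the requester's privileges, not the agent's.

    Guards against confused-deputy abuse: a high-privileged agent must not
    perform an action the low-privileged requester could not perform itself.
    """
    return action in ALLOWED_ACTIONS.get(requesting_user_role, set())

def execute_with_least_privilege(user_role: str, action: str, run_action):
    if not authorize_agent_action(user_role, action):
        raise PermissionError(f"{user_role!r} may not trigger {action!r}")
    # The verified action would run in a contained environment (sandbox, scoped token).
    return run_action()

# Usage: an analyst asking the agent to isolate a host is rejected.
try:
    execute_with_least_privilege("analyst", "isolate_host", lambda: "isolated")
except PermissionError as e:
    print("blocked:", e)
```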


‘AI and ML the main focus in tech right now’

AI and machine learning are undoubtedly the main focuses in technology right now, with mentions everywhere. A great way to upskill in this area is by attending talks and seminars, which are frequently held and provide valuable insights into how these technologies are being applied in the industry. These events also help you stay up to date on the latest developments. If you have a strong interest in the field, taking an online course, even a free one, can be a great way to grasp the fundamentals, learn the terminology, and understand how to effectively apply these technologies in your current role. Cloud technology is another area that’s here to stay. It’s widely adopted and incredibly versatile. Cloud certifications are highly accessible, with plenty of resources available to help you prepare for the exams and follow the learning paths they offer. ... Being a people person is incredibly beneficial in this field. A significant part of the job involves communication – whether it’s sharing ideas or networking with coworkers in your area. Building these connections can greatly enhance your ability to perform and succeed in your role. Problem-solving is another key aspect of software engineering, and it’s something I’ve always enjoyed. While it can be particularly challenging at times, the sense of accomplishment and reward when your efforts pay off is unmatched.


Better Data Beats Better Models: The Case for Data Quality in ML

Data quality is a broad and abstract concept, but it becomes more measurable when we break it down into different dimensions. Accuracy is the most important and obvious one: If the input data is wrong (e.g., mislabeled transactions in fraud detection models), the model will simply learn incorrect patterns. Completeness is equally important. Without a high degree of coverage for important features, the model will lack context and produce weaker predictions. For example, a recommender system missing key user attributes will fail to provide personalized recommendations. Freshness plays a subtle but powerful role in data quality. Outdated data appears correct, but does not reflect real-world conditions. ... Detecting data quality issues is not just about a single check but rather about continuous monitoring. Statistical distribution checks are the first line of defense, helping detect anomalies or sudden shifts that can indicate broken data pipelines. ... Ignoring data quality can often turn out to be very expensive. Teams spend large amounts of compute to retrain models on flawed data, only to observe little to no business impact. Launch timelines get pushed back because teams spend weeks debugging data issues, time that could otherwise have been spent on feature development. In industries that are regulated, like finance and healthcare, poor data quality can cause compliance violations and increased legal expenses.
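As an illustration of the "first line of defense" distribution checks mentioned above, here is a minimal sketch that compares a feature's latest batch against a reference window with a two-sample Kolmogorov-Smirnov test; the threshold and the check_feature_drift helper are illustrative assumptions, and NumPy/SciPy are assumed to be available:

```python
import numpy as np
from scipy.stats import ks_2samp

def check_feature_drift(reference: np.ndarray, current: np.ndarray,
                        p_threshold: float = 0.01) -> dict:
    """Flag a numeric feature whose current distribution has shifted from the reference."""
    statistic, p_value = ks_2samp(reference, current)
    return {
        "ks_statistic": float(statistic),
        "p_value": float(p_value),
        "drifted": p_value < p_threshold,  # small p-value => distributions differ
    }

# Usage with synthetic data: a shifted mean should be flagged.
rng = np.random.default_rng(42)
ref = rng.normal(loc=0.0, scale=1.0, size=5_000)
cur = rng.normal(loc=0.4, scale=1.0, size=5_000)
print(check_feature_drift(ref, cur))
```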


DORA 2025: Faster, But Are We Any Better?

The newest DORA report — the “State of AI-Assisted Software Development” — lands at a time when AI is eating everything from code generation to documentation to operations. And just like those early DORA reports reframed speed versus stability, this one is reframing what AI is actually doing to our software delivery pipelines. Spoiler alert: It’s not as simple as “AI makes everything better.” ... Now here’s the counterintuitive part. For the first time, DORA shows AI adoption is linked to higher throughput. That’s right — teams using AI are moving work through the system faster than those who aren’t. But before you pop the champagne, look at the other half of the finding: Instability is still higher in AI-heavy teams. Faster, yes. Safer? Not so much. If you’ve been around the block, this won’t shock you. We saw the same thing in the early days of automation — speed without discipline just meant you hit the wall quicker. ... Another gem buried in the report is the role of value stream management. AI tends to deliver “local optimizations” — an engineer codes faster, a test suite runs quicker — but without VSM, those wins don’t always roll up into business outcomes. With VSM in place, AI-driven productivity gains translate into measurable improvements at the team and product level. That, to me, is vintage DORA. Remember when they proved that culture — psychological safety, autonomy, collaboration — wasn’t just a warm fuzzy HR concept but directly correlated with elite performance? Same here. VSM turns AI from a toy into a force multiplier.


The 5 Technology Trends For 2026 Everyone Must Prepare For Now

In recent years, we've seen industry, governments, education and everyday folk scrambling to adapt to the disruptive impact of AI. But by 2026, we're starting to get answers to some of the big questions around its effect on jobs, business and day-to-day life. Now, the focus shifts from simply reacting to reinventing and reshaping in order to find our place in this brave, different and sometimes frightening new world. ... In tech, agents were undoubtedly the hot buzzword of 2025, representing a meaningful evolution over previous AI applications like chatbots and generative AI. Rather than simply answering questions and generating content, agents take action on our behalf, and in 2026, this will become an increasingly frequent and normal occurrence in everyday life. From automating business decision-making to managing and coordinating hectic family schedules, AI agents will handle the “busy work” involved in planning and problem-solving, freeing us up to focus on the big picture or simply slowing down and enjoying life. ... Quantum computing harnesses the strange and seemingly counterintuitive behavior of particles at the sub-atomic level to accomplish many complex computing tasks millions of times faster than "classic" computers. For the last decade, there's been excitement and hype over their performance in labs and research environments, but in 2026, we are likely to see further adoption in the real world. 


GreenOps and FinOps: Strategic Convergence in the Cloud Transformation Journey

FinOps, short for “Financial Operations,” is a cultural practice designed to bring financial accountability to the cloud. It blends engineering, finance, and business teams to manage cloud costs collaboratively and transparently. The goal is clear: maximize business value from the cloud by making spending decisions grounded in data and aligned with business objectives. ... GreenOps, on the other hand, is all about sustainability in cloud operations. It’s a discipline that encourages organizations to monitor, manage, and minimize the environmental footprint of their cloud usage. GreenOps revolves around using renewable energy-powered cloud resources, recycling or reusing digital assets, optimizing workloads, and selecting eco-friendly services, all with the aim of reducing carbon emissions and supporting broader sustainability goals. ... In practical terms, GreenOps activities such as deleting unused storage volumes, rightsizing virtual machines, and consolidating workloads not only shrink the carbon footprint but also slash monthly cloud bills. Thus, sustainability efforts act as “passive” cost optimizers—delivering FinOps benefits without explicit financial tracking. ... FinOps and GreenOps aren’t one-off projects but ongoing practices. Regular reviews, “cost and sustainability audits,” and optimization sprints keep teams focused. 
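As a small, hedged example of the "passive" cost-and-carbon optimization described above, the sketch below lists unattached block-storage volumes, assuming an AWS environment with boto3 configured; the find_unattached_volumes helper and region default are illustrative, and actual deletion is deliberately left to a review step:

```python
import boto3

def find_unattached_volumes(region: str = "us-east-1"):
    """List EBS volumes in the 'available' state, i.e., attached to nothing."""
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_volumes")
    unattached = []
    for page in paginator.paginate(
        Filters=[{"Name": "status", "Values": ["available"]}]
    ):
        for vol in page["Volumes"]:
            unattached.append((vol["VolumeId"], vol["Size"]))  # size in GiB
    return unattached

if __name__ == "__main__":
    for volume_id, size_gib in find_unattached_volumes():
        print(f"{volume_id}: {size_gib} GiB unattached")
        # Deletion (ec2.delete_volume) should only follow review and tag policies.
```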


Rethinking AI’s Role in Mental Health with GPT-5

GPT-5 has surfaced critical questions in the AI mental health community: What happens when people treat a general purpose chatbot as a source of care? How should companies be held accountable for the emotional effects of design decisions? What responsibilities do we bear, as a health care ecosystem, in ensuring these tools are developed with clinical guardrails in place? ... OpenAI has since taken steps to restore user confidence by making its personality “warmer and friendlier,” and encouraging breaks during extended sessions. However, it doesn’t change the fact that ChatGPT was built for engagement, not clinical safety. The interface may feel approachable, especially appealing to those looking to process feelings around high-stigma topics – from intrusive thoughts to identity struggles – but without thoughtful design, that comfort can quickly become a trap. ... Designing for engagement alone won’t get us there, and we must design for outcomes rooted in long-term wellbeing. At the same time, we should broaden our scope to include AI systems that shape the care experience, such as reducing the administrative burden on clinicians by streamlining billing, reimbursement, and other time-intensive tasks that contribute to burnout. Achieving this requires a more collaborative infrastructure to help shape what that looks like, and co-create technology with shared expertise from all corners of the industry including AI ethicists, clinicians, engineers, researchers, policymakers and users themselves.


Cybersecurity skills shortage: can upskilling close the talent gap?

According to reports, the global cybersecurity workforce gap exceeded 4 million professionals in 2023, with India alone requiring more than 500,000 skilled experts to meet current demand. This shortage is not merely a hiring challenge; it is a business risk. ... The traditional answer to talent shortages has been to hire more people. But in cybersecurity, where demand far outstrips supply, hiring alone cannot solve the problem. Upskilling, training existing employees to meet evolving requirements, offers a sustainable solution. Upskilling is not about starting from scratch. It leverages existing talent pools, such as IT administrators, network engineers, or even software developers, and equips them with cybersecurity expertise. ... While technology plays a central role in cybersecurity, the human factor remains the ultimate line of defense. Many high-profile breaches stem not from technical weaknesses but from human errors such as phishing clicks or misconfigured systems. Upskilling programs must therefore go beyond technical mastery to also emphasise behavioral awareness, ethical responsibility, and decision-making under pressure. ... The cybersecurity talent gap is unlikely to vanish overnight. However, the organisations that will thrive are those that view the challenge not as a bottleneck but as an opportunity to reimagine workforce development. Upskilling is the most pragmatic path forward, enabling companies to build resilience, retain talent, and remain competitive in an era of escalating cyber risks.

Daily Tech Digest - February 04, 2025


Quote for the day:

"Develop success from failures. Discouragement and failure are two of the surest stepping stones to success." -- Dale Carnegie


Technology skills gap plagues industries, and upskilling is a moving target

“The deepening threat landscape and rapidly evolving high-momentum technologies like AI are forcing organizations to move with lightning speed to fill specific gaps in their job architectures, and too often they are stumbling,” said David Foote, chief analyst at consultancy Foote Partners. To keep up with the rapidly changing landscape, Gartner suggests that organizations invest in agile learning for tech teams. “In the context of today’s AI-fueled accelerated disruption, many business leaders feel learning is too slow to respond to the volume, variety and velocity of skills needs,” said Chantal Steen, a senior director in Gartner’s HR practice. “Learning and development must become more agile to respond to changes faster and deliver learning more rapidly and more cost effectively.” Studies from staffing firm ManpowerGroup, hiring platform Indeed, and Deloitte consulting show that tech hiring will focus on candidates with flexible skills to meet evolving demands. “Employers know a skilled and adaptable workforce is key to navigating transformation, and many are prioritizing hiring and retaining people with in-demand flexible skills that can flex to where demand sits,” said Jonas Prising, ManpowerGroup chair and CEO.


Mixture of Experts (MoE) Architecture: A Deep Dive & Comparison of Top Open-Source Offerings

The application of MoE to open-source LLMs offers several key advantages. Firstly, it enables the creation of more powerful and sophisticated models without incurring the prohibitive costs associated with training and deploying massive, single-model architectures. Secondly, MoE facilitates the development of more specialized and efficient LLMs, tailored to specific tasks and domains. This specialization can lead to significant improvements in performance, accuracy, and efficiency across a wide range of applications, from natural language translation and code generation to personalized education and healthcare. The open-source nature of MoE-based LLMs promotes collaboration and innovation within the AI community. By making these models accessible to researchers, developers, and businesses, MoE fosters a vibrant ecosystem of experimentation, customization, and shared learning. ... Integrating MoE architecture into open-source LLMs represents a significant step forward in the evolution of artificial intelligence. By combining the power of specialization with the benefits of open-source collaboration, MoE unlocks new possibilities for creating more efficient, powerful, and accessible AI models that can revolutionize various aspects of our lives.
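To make the specialization-with-sparse-compute idea concrete, here is a toy, framework-free sketch of MoE routing: a gating network scores experts per token and only the top-k experts are evaluated, with their outputs mixed by the normalized gate weights. All sizes, the moe_forward helper, and the random "experts" are illustrative assumptions rather than any particular model's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 4, 2

W_gate = rng.normal(size=(d_model, n_experts))            # gating network
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_forward(x: np.ndarray) -> np.ndarray:
    """x: (d_model,) token representation -> (d_model,) mixed expert output."""
    logits = x @ W_gate
    chosen = np.argsort(logits)[-top_k:]                   # indices of the top-k experts
    weights = np.exp(logits[chosen] - logits[chosen].max())
    weights /= weights.sum()                               # softmax over the chosen experts
    # Only the selected experts run, so most parameters stay inactive per token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

token = rng.normal(size=d_model)
print(moe_forward(token).shape)   # (16,)
```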


The DeepSeek Disruption and What It Means for CIOs

The emergence of DeepSeek has also revived a long-standing debate about open-source AI versus proprietary AI. Open-source AI is not a silver bullet. CIOs need to address critical risks as open-source AI models, if not secured properly, can be exposed to grave cyberthreats and adversarial attacks. While DeepSeek currently shows extraordinary efficiency, it requires an internal infrastructure, unlike GPT-4, which can seamlessly scale on OpenAI's cloud. Open-source AI models lack support and skills, thereby mandating users to build their own expertise, which could be demanding. "What happened with DeepSeek is actually super bullish. I look at this transition as an opportunity rather than a threat," said Steve Cohen, founder of Point72. ... Regulatory non-compliance adds another challenge, as many governments restrict and disallow sensitive enterprise data from being processed by Chinese technologies. The possibility of a backdoor can't be ruled out, which could expose enterprises to additional risks. CIOs need to conduct extensive security audits before deploying DeepSeek. Organizations can implement safeguards such as on-premises deployment to avoid data exposure. Integrating strict encryption protocols can help keep AI interactions confidential, and performing rigorous security audits ensures the model's safety before deploying it into business workflows.


Why GreenOps will succeed where FinOps is failing

The cost-control focus fails to engage architects and engineers in rethinking how systems are designed, built and operated for greater efficiency. This lack of engagement results in inertia and minimal progress. For example, the database team we worked with in an organization new to the cloud launched all the AWS RDS database servers from dev through production, incurring a $600K a month cloud bill nine months before the scheduled production launch. The overburdened team was not thinking about optimizing costs, but rather optimizing their own time and getting out of the way of the migration team as quickly as possible. ... GreenOps — formed by merging FinOps, sustainability and DevOps — addresses the limitations of FinOps while integrating sustainability as a core principle. Green computing contributes to GreenOps by emphasizing energy-efficient design, resource optimization and the use of sustainable technologies and platforms. This foundational focus ensures that every system built under GreenOps principles is not only cost-effective but also minimizes its environmental footprint, aligning technological innovation with ecological responsibility. Moreover, we’ve found that providing emissions feedback to architects and engineers is a bigger motivator than cost to inspire them to design more efficient systems and build automation to shut down underutilized resources.
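One hedged illustration of the "automation to shut down underutilized resources" mentioned above: assuming AWS with boto3 and a hypothetical naming convention in which non-production RDS instances end in "-dev", a scheduled job could stop them outside business hours. Everything here, including stop_idle_dev_databases and the dry-run default, is an assumption for illustration:

```python
import boto3

def stop_idle_dev_databases(region: str = "us-east-1", dry_run: bool = True):
    """Stop running RDS instances that follow the hypothetical '-dev' naming convention."""
    rds = boto3.client("rds", region_name=region)
    for db in rds.describe_db_instances()["DBInstances"]:
        name = db["DBInstanceIdentifier"]
        if name.endswith("-dev") and db["DBInstanceStatus"] == "available":
            print(f"stopping {name}")
            if not dry_run:
                rds.stop_db_instance(DBInstanceIdentifier=name)

# Typically invoked on a schedule (e.g., a nightly cron job or scheduled Lambda).
if __name__ == "__main__":
    stop_idle_dev_databases(dry_run=True)
```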


Best Practices for API Rate Limits and Quotas

Unlike short-term rate limits, the goal of quotas is to enforce business terms such as monetizing your APIs and protecting your business from high-cost overruns by customers. They measure customer utilization of your API over longer durations, such as per hour, per day, or per month. Quotas are not designed to prevent a spike from overwhelming your API. Rather, quotas regulate your API’s resources by ensuring a customer stays within their agreed contract terms. ... Even a protection mechanism like rate limiting could have errors. For example, a bad network connection with Redis could cause reading rate limit counters to fail. In such scenarios, it’s important not to artificially reject all requests or lock out users even though your Redis cluster is inaccessible. Your rate-limiting implementation should fail open rather than fail closed, meaning all requests are allowed even though the rate limit implementation is faulting. This also means rate limiting is not a workaround to poor capacity planning, as you should still have sufficient capacity to handle these requests or even design your system to scale accordingly to handle a large influx of new requests. This can be done through auto-scale, timeouts, and automatic trips that enable your API to still function.
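A minimal sketch of the fail-open behavior described above, assuming a Redis-backed fixed-window counter via redis-py; the limit, key format, and allow_request helper are illustrative assumptions:

```python
import time
import redis

r = redis.Redis(host="localhost", port=6379, socket_timeout=0.05)
LIMIT_PER_MINUTE = 100

def allow_request(client_id: str) -> bool:
    """Fixed-window rate limit that allows traffic if the counter store is down."""
    window = int(time.time() // 60)
    key = f"ratelimit:{client_id}:{window}"
    try:
        count = r.incr(key)
        if count == 1:
            r.expire(key, 60)              # the window expires after one minute
        return count <= LIMIT_PER_MINUTE
    except redis.RedisError:
        # Fail open: a broken limiter must not become an outage of its own.
        return True
```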


Protecting Ultra-Sensitive Health Data: The Challenges

Protecting ultra-sensitive information "is an incredibly confusing and complicated and evolving part of the law," said regulatory attorney Kirk Nahra of the law firm WilmerHale. "HIPAA generally does not distinguish between categories of health information," he said. "There are exceptions - including the recent Dobbs rule - but these are not fundamental in their application," he said. Privacy protections related to abortion procedures are perhaps the most hotly debated type of patient information. For instance, last June - in response to the June 2022 Supreme Court's Dobbs ruling, which overturned the national right to abortion - the Biden administration's U.S. Department of Health and Human Services modified the HIPAA Privacy Rule to add additional safeguards for the access, use and disclosure of reproductive health information. The rule is aimed at protecting women from the use or disclosure of their reproductive health information when it is sought to investigate or impose liability on individuals, healthcare providers or others who seek, obtain, provide or facilitate reproductive healthcare that is lawful under the circumstances in which such healthcare is provided. But that rule is being challenged in federal court by 15 state attorneys general seeking to revoke the regulations.


Evolving threat landscape, rethinking cyber defense, and AI: Opportunities and risk

Businesses are firmly in attackers’ crosshairs. Financially motivated cybercriminals conduct ransomware attacks with record-breaking ransoms being paid by companies seeking to avoid business interruption. Others, including nation-state hackers, infiltrate companies to steal intellectual property and trade secrets to gain commercial advantage over competitors. Further, we regularly see critical infrastructure being targeted by nation-state cyberattacks designed to act as sleeper cells that can be activated in times of heightened tension. Companies are on the back foot. ... As zero trust disrupts obsolete firewall and VPN-based security, legacy vendors are deploying firewalls and VPNs as virtual machines in the cloud and calling it zero trust architecture. This is akin to DVD hardware vendors deploying DVD players in a data center and calling it Netflix! It gives a false sense of security to customers. Organizations need to make sure they are really embracing zero trust architecture, which treats everyone as untrusted and ensures users connect to specific applications or services, rather than a corporate network. ... Unfortunately, the business world’s harnessing of AI for cyber defense has been slow compared to the speed of threat actors harnessing it for attacks. 


Six essential tactics data centers can follow to achieve more sustainable operations

By adjusting energy consumption based on real-time demand, data centers can significantly enhance their operational efficiency. For example, during periods of low activity, power can be conserved by reducing energy use, thus minimizing waste without compromising performance. This includes dynamic power management technologies in switch and router systems, such as shutting down unused line cards or ports and controlling fan speeds to optimize energy use based on current needs. Conversely, during peak demand, operations can be scaled up to meet increased requirements, ensuring consistent and reliable service levels. Doing so not only reduces unnecessary energy expenditure, but also contributes to sustainability efforts by lowering the environmental impact associated with energy-intensive operations. ... Heat generated from data center operations can be captured and repurposed to provide heating for nearby facilities and homes, transforming waste into a valuable resource. This approach promotes a circular energy model, where excess heat is redirected instead of discarded, reducing the environmental impact. Integrating data centers into local energy systems enhances sustainability and offers tangible benefits to surrounding areas and communities whilst addressing broader energy efficiency goals.
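As a purely illustrative simulation of demand-based power management (the real controls live in vendor-specific switch and router firmware), the sketch below powers down ports whose utilization falls under a threshold and lets fan speed track aggregate load; the Port model, threshold, and fan formula are all assumptions:

```python
from dataclasses import dataclass

IDLE_THRESHOLD = 0.05  # 5% of link capacity

@dataclass
class Port:
    name: str
    utilization: float      # fraction of link capacity currently in use
    powered: bool = True

def manage_power(ports):
    for p in ports:
        if p.utilization < IDLE_THRESHOLD and p.powered:
            p.powered = False                     # reclaim energy from idle ports
        elif p.utilization >= IDLE_THRESHOLD and not p.powered:
            p.powered = True                      # restore capacity when demand returns
    avg_load = sum(p.utilization for p in ports) / max(len(ports), 1)
    fan_speed = min(1.0, 0.3 + avg_load)          # fans track demand rather than worst case
    return fan_speed

ports = [Port("eth0", 0.62), Port("eth1", 0.01), Port("eth2", 0.00)]
print(manage_power(ports), [(p.name, p.powered) for p in ports])
```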


The Engineer’s Guide to Controlling Configuration Drift

“Preventing configuration drift is the bedrock for scalable, resilient infrastructure,” comments Mayank Bhola, CTO of LambdaTest, a cloud-based testing platform that provides instant infrastructure. “At scale, even small inconsistencies can snowball into major operational inefficiencies. We encountered these challenges [user-facing impact] as our infrastructure scaled to meet growing demands. Tackling this challenge head-on is not just about maintaining order; it’s about ensuring the very foundation of your technology is reliable. And so, by treating infrastructure as code and automating compliance, we at LambdaTest ensure every server, service, and setting aligns with our growth objectives, no matter how fast we scale.” Adopting drift detection and remediation strategies is imperative for maintaining a resilient infrastructure. ... The policies you set at the infrastructure level, such as those for SSH access, add another layer of security to your infrastructure. Ansible allows you to define policies like removing root access, changing the default SSH port, and setting user command permissions. “It’s easy to see who has access and what they can execute,” Kampa remarks. “This ensures resilient infrastructure, keeping things secure and allowing you to track who did what if something goes wrong.”
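Ansible expresses such policies declaratively; as a language-neutral illustration of the underlying drift check, the sketch below compares an sshd_config file against a desired state for the settings the article mentions (root login disabled, non-default port). The DESIRED values and detect_ssh_drift helper are illustrative assumptions:

```python
DESIRED = {"PermitRootLogin": "no", "Port": "2222"}

def detect_ssh_drift(path: str = "/etc/ssh/sshd_config") -> dict:
    """Report every desired SSH setting that is missing or differs on this host."""
    actual = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            parts = line.split(None, 1)           # key, value (handles spaces or tabs)
            if len(parts) == 2 and parts[0] in DESIRED:
                actual[parts[0]] = parts[1].strip()
    return {k: {"expected": v, "actual": actual.get(k, "<unset>")}
            for k, v in DESIRED.items() if actual.get(k) != v}

if __name__ == "__main__":
    print(detect_ssh_drift() or "no drift detected")
```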


Strategies for mitigating bias in AI models

The need to address bias in AI models stems from the fundamental principle of fairness. AI systems should treat all individuals equitably, regardless of their background. However, if the training data reflects existing societal biases, the model will likely reproduce and even exaggerate those biases in its outputs. For instance, if a facial recognition system is primarily trained on images of one demographic, it may exhibit lower accuracy rates for other groups, potentially leading to discriminatory outcomes. Similarly, a natural language processing model trained on predominantly Western text may struggle to understand or accurately represent nuances in other languages and cultures. ... Incorporating contextual data is essential for AI systems to provide relevant and culturally appropriate responses. Beyond basic language representation, models should be trained on datasets that capture the history, geography, and social issues of the populations they serve. For instance, an AI system designed for India should include data on local traditions, historical events, legal frameworks, and social challenges specific to the region. This ensures that AI-generated responses are not only accurate but also culturally sensitive and context-aware. Additionally, incorporating diverse media formats such as text, images, and audio from multiple sources enhances the model’s ability to recognise and adapt to varying communication styles.
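One simple check implied by the facial-recognition example above is to compare model accuracy across demographic groups and flag large gaps. The records format, gap threshold, and per_group_accuracy helper in this sketch are illustrative assumptions, not a complete fairness audit:

```python
from collections import defaultdict

def per_group_accuracy(records, max_gap: float = 0.05):
    """records: iterable of (group, y_true, y_pred); returns per-group accuracy and gap."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        total[group] += 1
        correct[group] += int(y_true == y_pred)
    accuracy = {g: correct[g] / total[g] for g in total}
    gap = max(accuracy.values()) - min(accuracy.values())
    return accuracy, gap, gap > max_gap   # True => disparity exceeds tolerance

records = [("A", 1, 1), ("A", 0, 0), ("A", 1, 1),
           ("B", 1, 0), ("B", 0, 0), ("B", 1, 0)]
print(per_group_accuracy(records))
```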

Daily Tech Digest - September 23, 2023

A CISO’s First 90 Days: The Ultimate Action Plan and Advice

It’s a CISO’s responsibility to establish a solid security foundation as rapidly as possible, and there are many mistakes that can be made along the way. This is why the first 90 days are the most important for new CISOs. Without a clear pathway to success in the early months, CISOs can lose confidence in their ability as change agents and put their entire organization at risk of data theft and financial loss. No pressure! Here’s our recommended roadmap for CISOs in the first 90 days of a new role. ... This means they can reduce the feeling of overwhelm and work strategically toward business goals. For a new CISO, it can be challenging trying to locate and classify all the sensitive data across an organization, not to mention ensuring that it’s also safe from a variety of threats. Data protection technology is often focused on perimeters and endpoints, giving internal bad actors the perfect opportunity to slip through any security gaps in files, folders, and devices. For large organizations, it’s practically impossible to audit data activity at scale without a robust DSPM.


There’s No Value in Observability Bloat. Let’s Focus on the Essentials

Telemetry data gathered from the distributed components of modern cloud architectures needs to be centralized and correlated for engineers to gain a complete picture of their environments. Engineers need a solution with critical capabilities such as dashboarding, querying and alerting, and AI-based analysis and response, and they need the operation and management of the solution to be streamlined. What’s important for them to know is that it’s not necessary to spend more to ensure peak performance and visibility as their environmental complexity grows. ... No doubt, more data is being generated, but most of it is not relevant or valuable to an organization. Observability can be optimized to bring greater value to customers, and that’s where the market is headed. Call it “essential observability.” It’s a disruptive vision to propose a re-architected approach to observability, but what engineers need is a new approach making it easier to surface insights from their telemetry data while deprioritizing low-value data. Costs can be reduced by consuming only the data that enables teams to maintain performance and drive smart business decisions.
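A hedged sketch of what "consuming only the data that matters" could look like at ingest time: keep all high-severity events, sample routine ones, and drop debug noise. The severity levels, sampling rates, and should_ingest helper are illustrative assumptions rather than any vendor's pipeline:

```python
import random

SAMPLE_RATES = {"DEBUG": 0.0, "INFO": 0.05, "WARN": 1.0, "ERROR": 1.0}

def should_ingest(event: dict) -> bool:
    """Keep an event with a probability tied to its severity."""
    rate = SAMPLE_RATES.get(event.get("severity", "INFO"), 1.0)
    return random.random() < rate

events = [{"severity": s, "msg": f"event {i}"} for i, s in
          enumerate(["DEBUG", "INFO", "ERROR", "INFO", "WARN"])]
kept = [e for e in events if should_ingest(e)]
print(f"kept {len(kept)} of {len(events)} events")
```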


Shedding Light on Dark Patterns in FinTech: Impact of DPDP Act

In practice, these patterns exploit human psychology and trick people into making unwanted choices/purchases. They have become a menace for the FinTech industry. These patterns are used to encourage people to sign up for loans, credit cards, and other financial products that they may not need or understand. However, the new Digital Personal Data Protection Act, 2023 (“DPDP Act”), can be used to bring such dark patterns under control. The DPDP Act requires online platforms to seek the consent of Data Principals through clear, specific and unambiguous notice before processing any data. Further, the Act empowers individuals to retract/withdraw consent to any agreement at any juncture. ... Companies will need to review their user interfaces and remove any dark patterns that they are using, protect personal data and use it for ‘legitimate purposes’ only, and obtain consent from users, through clear affirmative action, in unambiguous terms. They will also need to develop new ways to promote their products and services without relying on deception.


Can business trust ChatGPT?

It might seem premature to worry about trust when there is already so much interest in the opportunities Gen AI can offer. However, it needs to be recognized that there’s also an opportunity cost — inaccuracy and misuse could be disastrous in ways organizations can’t easily anticipate. Up until now, digital technology has traditionally been viewed as trustworthy in the sense that it is seen as deterministic. Like an Excel formula, it will be executed in the same manner 100% of the time, leading to a predictable, consistent outcome. Even when the outcome yields an error — due to implementation issues, changes in the context in which it has been deployed, or even bugs and faults — there is nevertheless a sense that technology should work in a certain way. In the case of Gen AI, however, things are different; even the most optimistic hype acknowledges that it can be unpredictable, and its output is often unexpected. Trust in consistency seems to be less important than excitement at the sheer range of possibilities Gen AI can deliver, seemingly in an instant.


A Few Best Practices for Design and Implementation of Microservices

The first step is to define the microservices architecture. It has to be established how the services will interact with each other before a company attempts to optimise their implementation. Once the microservices architecture gets going, we must be able to capitalise on the increase in speed. It is better to start with a few coarse-grained but self-contained services. Fine graining can happen as the implementation matures over time. The developers, operations team, and testing fraternity may have extensive experience with monoliths, but a microservices-based system is a new reality; hence, they need time to cope with this new shift. Do not discard the monolithic application immediately. Instead, have it co-exist with the new microservices, and iteratively deprecate similar functionalities in the monolithic application. This is not easy and requires a significant investment in people and processes to get started. As with any technology, it is always better to avoid the big bang approach, and identify ways to get your toes wet before diving in head first.
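As a small illustration of letting the monolith and new microservices co-exist, the sketch below routes migrated path prefixes to services and sends everything else to the monolith, so functionality can be deprecated iteratively. The routes, internal URLs, and resolve_backend helper are hypothetical:

```python
# Hypothetical routing table for a gateway sitting in front of monolith + services.
MIGRATED_ROUTES = {
    "/orders": "http://orders-service.internal",
    "/payments": "http://payments-service.internal",
}
MONOLITH = "http://legacy-monolith.internal"

def resolve_backend(path: str) -> str:
    for prefix, backend in MIGRATED_ROUTES.items():
        if path.startswith(prefix):
            return backend
    return MONOLITH   # anything not yet migrated still goes to the monolith

print(resolve_backend("/orders/42"))    # routed to the orders service
print(resolve_backend("/inventory/7"))  # still handled by the monolith
```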


Bridging Silos and Overcoming Collaboration Antipatterns in Multidisciplinary Organisations

Collaboration is at the heart of teamwork. Many modern organisations set up teams to be cross-functional or multidisciplinary. Multidisciplinary teams are made up of specialists from different disciplines collaborating daily towards a shared outcome. They have the roles needed to design, plan, deliver, deploy and iterate a product or service. Modern approaches and frameworks often focus on increasing flow and reducing blockers, and one way to do this is to remove the barrier between functions. However, as organisations grow in size and complexity, they look for different ways of working together, and some of these create collaboration anti-patterns. Three of the most common antipatterns I see and have named here are: one person split across multiple teams; product vs. engineering wars; and X-led organisations.


The Rise of the Malicious App

Threat actors have changed the playing field with the introduction of malicious apps. These applications add nothing of value to the hub app. They are designed to connect to a SaaS application and perform unauthorized activities with the data contained within. When these apps connect to the core SaaS stack, they request certain scopes and permissions. These permissions then allow the app the ability to read, update, create, and delete content. Malicious applications may be new to the SaaS world, but it's something we've already seen in mobile. Threat actors would create a simple flashlight app, for example, that could be downloaded through the app store. Once downloaded, these minimalistic apps would ask for absurd permission sets and then data-mine the phone. ... Threat actors are using sophisticated phishing attacks to connect malicious applications to core SaaS applications. In some instances, employees are led to a legitimate-looking site, where they have the opportunity to connect an app to their SaaS. In other instances, a typo or slightly misspelled brand name could land an employee on a malicious application's site. 
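A hypothetical sketch of the review this situation calls for: inventory the third-party apps connected to the SaaS stack and flag those requesting write, send, or delete scopes. The scope names, inventory format, and flag_over_privileged helper are illustrative assumptions:

```python
RISKY_SCOPES = {"mail.send", "files.delete", "files.write", "users.update"}

def flag_over_privileged(connected_apps):
    """connected_apps: list of {"name": str, "scopes": set[str]}; return risky findings."""
    findings = []
    for app in connected_apps:
        risky = set(app["scopes"]) & RISKY_SCOPES
        if risky:
            findings.append((app["name"], sorted(risky)))
    return findings

apps = [
    {"name": "calendar-sync", "scopes": {"calendar.read"}},
    {"name": "pdf-helper", "scopes": {"files.read", "files.write", "mail.send"}},
]
print(flag_over_privileged(apps))
```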


What Is GreenOps? Putting a Sustainable Focus on FinOps

If the future of cloud sustainability appears bleak, Arora advises looking to examples of other tech advancements and the curve of their development, where early adopters led the way and then the main curve eventually followed. “The same thing happened with electric cars,” Arora points out. “They didn’t enter the mainstream because they were better for the environment; they entered the mainstream because the cost came down.” And this is what he predicts will happen with cloud sustainability. Right now, the early adopters are stepping forward and championing GreenOps as a part of the FinOps equation. In a few years, others will be able to measure their data, analyze how they reduced their carbon impact and what effect it had on cloud spending and savings, and then follow their lead. It’s naive to think that most companies will go out of their way (and perhaps even increase their cloud spending) to reduce their carbon footprint. 


The Growing Importance of AI Governance

As AI systems become more powerful and complex, businesses and regulatory agencies face two formidable obstacles: the complexity of the systems requires rule-making by technologists rather than politicians, bureaucrats, and judges; and the thorniest issues in AI governance involve value-based decisions rather than purely technical ones. An approach based on regulatory markets has been proposed that attempts to bridge the divide between government regulators who lack the required technical acumen and technologists in the private sector whose actions may be undemocratic. The technique adopts an outcome-based approach to regulation in place of the traditional reliance on prescriptive command-and-control rules. AI governance under this model would rely on licensed private regulators charged with ensuring AI systems comply with outcomes specified by governments, such as preventing fraudulent transactions and blocking illegal content. The private regulators would also be responsible for the safe use of autonomous vehicles, use of unbiased hiring practices, and identification of organizations that fail to comply with the outcome-based regulations.


Legal Issues for Data Professionals

Lawyers identify risks data professionals may not know they have. Moreover, because data is a new field of law, lawyers need to be innovative in creating legal structures in contracts to allow two or more parties to achieve their goals. For example, there are significant challenges attempting to apply the legal techniques traditionally used with other classes of business assets (such as intellectual property, real property, and corporate physical assets) to data as a business asset class. Because the old legal techniques do not fit well, lawyers and their clients need to develop new ways of handling the business and legal issues that arise, and in so doing, invent new legal structures that meet the specific attributes of data that differentiate data from other business assets. To take one example, using software agreements as a template for data transactions will not always work because the IP rights for software do not align with data, the concept of software deliverables and acceptance testing is not a good fit, and the representations and warranties are both over and underinclusive. 



Quote for the day:

"Rarely have I seen a situation where doing less than the other guy is a good strategy." -- Jimmy Spithill