
Daily Tech Digest - July 20, 2025


Quote for the day:

“Wisdom equals knowledge plus courage. You have to not only know what to do and when to do it, but you have to also be brave enough to follow through.” -- Jarod Kintz


Lean Agents: The Agile Workforce of Agentic AI

Organizations are tired of gold‑plated mega systems that promise everything and deliver chaos. Enter frameworks like AutoGen and LangGraph, alongside protocols such as the Model Context Protocol (MCP), all enabling Lean Agents to be spun up on-demand, plug into APIs, execute a defined task, then quietly retire. This is a radical departure from heavyweight models that stay online indefinitely, consuming compute cycles, budget, and attention. ... Lean Agents are purpose-built AI workers; minimal in design, maximally efficient in function. Think of them as stateless or scoped-memory micro-agents: they wake when triggered, perform a discrete task like summarizing an RFP clause or flagging anomalies in payments, and then gracefully exit, freeing resources and eliminating runtime drag. Lean Agents are to AI what Lambda functions are to code: ephemeral, single-purpose, and cloud-native. They may hold just enough context to operate reliably but otherwise avoid persistent state that bloats memory and complicates governance. ... From a technology standpoint, these frameworks, combined with the emerging MCP, give engineering teams the scaffolding to create discoverable, policy‑aware agent meshes. Lean Agents transform AI from a monolithic “brain in the cloud” into an elastic workforce that can be budgeted, secured, and reasoned about like any other microservice.
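The lifecycle described above (wake on a trigger, perform one discrete task, release scoped state) can be sketched in a few lines. This is a hedged illustration rather than any framework's actual API; `LeanAgent`, `flag_anomalies`, and the payment threshold are all hypothetical:

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class LeanAgent:
    """Ephemeral, single-purpose agent: wake, act, retire."""
    name: str
    task: Callable[[str], Any]                    # the one job this agent does
    context: dict = field(default_factory=dict)   # scoped memory only

    def run(self, payload: str) -> Any:
        try:
            return self.task(payload)   # perform the discrete task
        finally:
            self.context.clear()        # retire: release scoped state

# Hypothetical example: an agent that flags suspiciously large payments
def flag_anomalies(csv_line: str) -> bool:
    amount = float(csv_line.split(",")[1])
    return amount > 10_000

agent = LeanAgent("payment-checker", flag_anomalies)
result = agent.run("inv-042,25000.00")   # True: flagged as anomalous
```

The `finally` clause is the point: whatever scoped memory the agent held is dropped the moment its task completes, mirroring the Lambda-style ephemerality described above.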


Cloud Repatriation Is Harder Than You Think

Repatriation is not simply a reverse lift-and-shift process. Workloads that have developed in the cloud often have specific architectural dependencies that are not present in on-premises environments. These dependencies can include managed services like identity providers, autoscaling groups, proprietary storage solutions, and serverless components. As a result, moving a workload back on-premises typically requires substantial refactoring and a thorough risk assessment. Untangling these complex layers is more than just a migration; it represents a structural transformation. If the service expectations are not met, repatriated applications may experience poor performance or even fail completely. ... You cannot migrate what you cannot see. Accurate workload planning relies on complete visibility, which includes not only documented assets but also shadow infrastructure, dynamic service relationships, and internal east-west traffic flows. Static tools such as CMDBs or Visio diagrams often fall out of date quickly and fail to capture real-time behavior. These gaps create blind spots during the repatriation process. Application dependency mapping addresses this issue by illustrating how systems truly interact at both the network and application layers. Without this mapping, teams risk disrupting critical connections that may not be evident on paper.
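Application dependency mapping can be approximated as a graph problem: record observed east-west flows, then walk them transitively to find cloud-managed services a workload silently depends on. The flows, service names, and `cloud_managed` set below are hypothetical examples, not the output of any real discovery tool:

```python
from collections import defaultdict

# Hypothetical observed east-west flows: (source, destination)
flows = [
    ("web-portal", "auth-service"),
    ("web-portal", "managed-identity"),   # cloud-managed dependency
    ("auth-service", "managed-identity"),
    ("billing", "serverless-pdf"),        # cloud-managed dependency
]
cloud_managed = {"managed-identity", "serverless-pdf"}

deps = defaultdict(set)
for src, dst in flows:
    deps[src].add(dst)

def repatriation_blockers(workload: str) -> set:
    """Transitively collect cloud-managed services the workload touches."""
    seen, stack, blockers = set(), [workload], set()
    while stack:
        node = stack.pop()
        for dep in deps.get(node, ()):
            if dep in cloud_managed:
                blockers.add(dep)
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return blockers
```

Every service returned would need an on-premises replacement (or a refactor) before the workload could leave the cloud, which is the structural transformation the excerpt describes.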


AI Agents Are Creating a New Security Nightmare for Enterprises and Startups

The agentic AI landscape is still in its nascent stages, making it the opportune moment for engineering leaders to establish robust foundational infrastructure. While the technology is rapidly evolving, the core patterns for governance are familiar: Proxies, gateways, policies, and monitoring. Organizations should begin by gaining visibility into where agents are already running autonomously — chatbots, data summarizers, background jobs — and add basic logging. Even simple logs like “Agent X called API Y” are better than nothing. Routing agent traffic through existing proxies or gateways in a reverse mode can eliminate immediate blind spots. Implementing hard limits on timeouts, max retries, and API budgets can prevent runaway costs. While commercial AI gateway solutions are emerging, such as Lunar.dev, teams can start by repurposing existing tools like Envoy, HAProxy, or simple wrappers around LLM APIs to control and observe traffic. Some teams have built minimal “LLM proxies” in days, adding logging, kill switches, and rate limits. Concurrently, defining organization-wide AI policies — such as restricting access to sensitive data or requiring human review for regulated outputs — is crucial, with these policies enforced through the gateway and developer training.
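A minimal "LLM proxy" of the kind described, with logging, a budget cap, and a kill switch, really can be built in a few dozen lines. The sketch below assumes a generic `backend` callable standing in for a real LLM client; the class name and limits are illustrative:

```python
import time

class LLMProxy:
    """Minimal gateway wrapper: logs calls, enforces a budget and a kill switch.

    `backend` is any callable taking a prompt and returning text; in a real
    deployment it would wrap an LLM API client.
    """
    def __init__(self, backend, max_calls=100):
        self.backend = backend
        self.max_calls = max_calls
        self.calls = 0
        self.killed = False
        self.log = []

    def complete(self, agent_id: str, prompt: str) -> str:
        if self.killed:
            raise RuntimeError("kill switch engaged")
        if self.calls >= self.max_calls:
            raise RuntimeError("API budget exhausted")
        self.calls += 1
        # Even this simple record ("Agent X called the API") beats nothing
        self.log.append((time.time(), agent_id, len(prompt)))
        return self.backend(prompt)

proxy = LLMProxy(backend=lambda p: p.upper(), max_calls=2)
out = proxy.complete("agent-x", "summarize this clause")
```

Routing all agent traffic through one object like this is what makes the later steps (policies, rate limits, human-review hooks) enforceable in a single place.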


The Evolution of Software Testing in 2025: A Comprehensive Analysis

The testing community has evolved beyond the conventional shift-left and shift-right approaches to embrace what industry leaders term "shift-smart" testing. This holistic strategy recognizes that quality assurance must be embedded throughout the entire software development lifecycle, from initial design concepts through production monitoring and beyond. While shift-left testing continues to emphasize early validation during development phases, shift-right testing has gained equal prominence through its focus on observability, chaos engineering, and real-time production testing. ... Modern testing platforms now provide insights into how testing outcomes relate to user churn rates, release delays, and net promoter scores, enabling organizations to understand the direct business impact of their quality assurance investments. This data-driven approach transforms testing from a technical activity into a business-critical function with measurable value. Artificial intelligence platforms are revolutionizing test prioritization by predicting where failures are most likely to occur, allowing testing teams to focus their efforts on the highest-risk areas. ... Modern testers are increasingly taking on roles as quality coaches, working collaboratively with development teams to improve test design and ensure comprehensive coverage aligned with product vision.
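Risk-based test prioritization can be illustrated with a toy scoring function. The test names, signals, and weights below are invented for illustration; a real AI platform would learn them from historical failure data:

```python
# Hypothetical risk signals per test: recent failure rate and churn in the
# code paths the test covers.
tests = {
    "test_checkout": {"fail_rate": 0.20, "code_churn": 0.8},
    "test_login":    {"fail_rate": 0.05, "code_churn": 0.1},
    "test_search":   {"fail_rate": 0.10, "code_churn": 0.5},
}

def risk_score(signals: dict) -> float:
    # Simple weighted blend; the weights are illustrative, not tuned.
    return 0.6 * signals["fail_rate"] + 0.4 * signals["code_churn"]

# Run the riskiest tests first so failures surface as early as possible
prioritized = sorted(tests, key=lambda t: risk_score(tests[t]), reverse=True)
```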


7 lessons I learned after switching from Google Drive to a home NAS

One of the first things I realized was that a NAS is only as fast as the network it’s sitting on. Even though my NAS had decent specs, file transfers felt sluggish over Wi-Fi. The new drives weren’t at fault, but my old router was proving to be a bottleneck. Once I wired things up and upgraded my router, the difference was night and day. Large files opened like they were local. So, if you’re expecting killer performance, pay attention to the network box too, because it matters just as much. ... There was a random blackout at my place, and until then, I hadn’t hooked my NAS to a power backup system. As a result, the NAS shut off mid-transfer without warning. I couldn’t tell if I had just lost a bunch of files or if the hard drives had been damaged too — and that was fairly scary. I couldn’t let this happen again, so I decided to connect the NAS to an uninterruptible power supply (UPS). ... I assumed that once I uploaded my files to Google Drive, they were safe. Google would do the tiring job of syncing, duplicating, and mirroring on some faraway data center. But in a self-hosted environment, you are the one responsible for all that. I had to put safety nets in place for possible instances where a drive fails or the NAS dies. My current strategy involves keeping some archived files on a portable SSD, a few important folders synced to the cloud, and some everyday folders on my laptop set up to sync two-way with my NAS.


5 key questions your developers should be asking about MCP

Despite all the hype about MCP, here’s the straight truth: It’s not a massive technical leap. MCP essentially “wraps” existing APIs in a way that’s understandable to large language models (LLMs). Sure, a lot of services already have an OpenAPI spec that models can use. For small or personal projects, the objection that MCP “isn’t that big a deal” is pretty fair. ... Remote deployment obviously addresses scaling, but it opens up a can of worms around transport complexity. The original HTTP+SSE approach was replaced by a March 2025 streamable HTTP update, which tries to reduce complexity by putting everything through a single /messages endpoint. Even so, this isn’t really needed for most companies that are likely to build MCP servers. But here’s the thing: A few months later, support is spotty at best. Some clients still expect the old HTTP+SSE setup, while others work with the new approach — so, if you’re deploying today, you’re probably going to support both. Protocol detection and dual transport support are a must. ... However, the biggest security consideration with MCP is around tool execution itself. Many tools need broad permissions to be useful, which means sweeping scope design is inevitable. Even without a heavy-handed approach, your MCP server may access sensitive data or perform privileged operations.
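Dual transport support starts with detecting which style of client is connecting. Here is a hedged sketch with illustrative endpoint names; the real negotiation involves more of the handshake than a path and an Accept header:

```python
def pick_transport(path: str, accept_header: str) -> str:
    """Choose a transport for an incoming MCP-style request.

    Assumption for this sketch: legacy clients open an SSE stream (a GET
    expecting text/event-stream on a dedicated endpoint), while newer
    clients POST JSON-RPC to a single streamable-HTTP endpoint.
    """
    if path.endswith("/sse") or "text/event-stream" in accept_header:
        return "http+sse"           # legacy HTTP+SSE client
    return "streamable-http"        # post-March-2025 streamable HTTP

legacy = pick_transport("/sse", "text/event-stream")
modern = pick_transport("/messages", "application/json")
```

A server that branches like this at the front door can serve both client generations from one deployment, which is what "dual transport support" amounts to in practice.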


Firmware Vulnerabilities Continue to Plague Supply Chain

"The major problem is that the device market is highly competitive and the vendors [are] competing not only on time-to-market, but also on pricing advantages," Matrosov says. "In many instances, some device manufacturers have considered security as an unnecessary additional expense." The complexity of the supply chain is not the only challenge for the developers of firmware and motherboards, says Martin Smolár, a malware researcher with ESET. The complexity of the code is also a major issue, he says. "Few people realize that UEFI firmware is comparable in size and complexity to operating systems — it literally consists of millions of lines of code," he says. ... One practice that hampers security: Vendors will often try to only distribute security fixes under a non-disclosure agreement, leaving many laptop OEMs unaware of potential vulnerabilities in their code. That's the exact situation that left Gigabyte's motherboards with a vulnerable firmware version. Firmware vendor AMI fixed the issues years ago, but the fixes have still not propagated out to all the motherboard OEMs. ... Yet, because firmware is always evolving as better and more modern hardware is integrated into motherboards, the toolset also needs to be modernized, Cobalt's Ollmann says.


Beyond Pilots: Reinventing Enterprise Operating Models with AI

Historically, AI models required vast volumes of clean, labeled data, making insights slow and costly. Large language models (LLMs) have upended this model, pre-trained on billions of data points and able to synthesize organizational knowledge, market signals, and past decisions to support complex, high-stakes judgment. AI is becoming a powerful engine for revenue generation through hyper-personalization of products and services, dynamic pricing strategies that react to real-time market conditions, and the creation of entirely new service offerings. More significantly, AI is evolving from completing predefined tasks to actively co-creating superior customer experiences through sophisticated conversational commerce platforms and intelligent virtual agents that understand context, nuance, and intent in ways that dramatically enhance engagement and satisfaction. ... In R&D and product development, AI is revolutionizing operating models by enabling faster go-to-market cycles. AI can simulate countless design alternatives, optimize complex supply chains in real time, and co-develop product features based on deep analysis of customer feedback and market trends. These systems can draw from historical R&D successes and failures across industries, accelerating innovation by applying lessons learned from diverse contexts and domains.


Alternative clouds are on the rise

Alt clouds, in their various forms, represent a departure from the “one size fits all” mentality that initially propelled the public cloud explosion. These alternatives to the Big Three prioritize specificity and specialization, and often offer an advantage through locality, control, or workload focus. Private cloud, epitomized by offerings from VMware and others, has found renewed relevance in a world grappling with escalating cloud bills, data sovereignty requirements, and unpredictable performance from shared infrastructure. The old narrative that “everything will run in the public cloud eventually” is being steadily undermined as organizations rediscover the value of dedicated infrastructure, either on-premises or in hosted environments that behave, in almost every respect, like cloud-native services. ... What begins as cost optimization or risk mitigation can quickly become an administrative burden, soaking up engineering time and escalating management costs. Enterprises embracing heterogeneity have no choice but to invest in architects and engineers who are familiar not only with AWS, Azure, or Google, but also with VMware, CoreWeave, a sovereign European platform, or a local MSP’s dashboard.


Making security and development co-owners of DevSecOps

In my view, DevSecOps should be structured as a shared responsibility model, with ownership but no silos. Security teams must lead from a governance and risk perspective, defining the strategy, standards, and controls. However, true success happens when development teams take ownership of implementing those controls as part of their normal workflow. In my career, especially while leading security operations across highly regulated industries, including finance, telecom, and energy, I’ve found this dual-ownership model most effective. ... However, automation without context becomes dangerous, especially closer to deployment. I’ve led SOC teams that had to intervene because automated security policies blocked deployments over non-exploitable vulnerabilities in third-party libraries. That’s a classic example where automation caused friction without adding value. So the balance is about maturity: automate where findings are high-confidence and easily fixable, but maintain oversight in phases where risk context matters, like release gates, production changes, or threat hunting. ... Tools are often dropped into pipelines without tuning or context, overwhelming developers with irrelevant findings. The result? Fatigue, resistance, and workarounds.
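The "automate high-confidence findings, keep humans in the loop elsewhere" balance can be expressed as a simple policy gate. The thresholds, finding fields, and CVE identifiers below are illustrative, not drawn from any real scanner:

```python
def gate(findings):
    """Decide per finding: block the deployment, or surface for human review.

    Automate where confidence is high and the issue is exploitable; route
    everything else to review, so pipelines aren't blocked over
    non-exploitable vulnerabilities in third-party libraries.
    """
    decisions = {}
    for f in findings:
        if f["confidence"] >= 0.9 and f["exploitable"]:
            decisions[f["id"]] = "block"
        else:
            decisions[f["id"]] = "review"
    return decisions

findings = [
    {"id": "CVE-A", "confidence": 0.95, "exploitable": True},
    {"id": "CVE-B", "confidence": 0.95, "exploitable": False},  # unreachable 3rd-party code
    {"id": "CVE-C", "confidence": 0.40, "exploitable": True},   # low-confidence signal
]
decisions = gate(findings)
```

Only CVE-A blocks outright; the other two go to the SOC instead of failing the build, which is the friction-avoiding oversight the excerpt argues for.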

Daily Tech Digest - April 30, 2025


Quote for the day:

"You can’t fall if you don’t climb. But there’s no joy in living your whole life on the ground." -- Unknown


Common Pitfalls and New Challenges in IT Automation

“You don’t know what you don’t know and can’t improve what you can’t see. Without process visibility, automation efforts may lead to automating flawed processes. In effect, accelerating problems while wasting both time and resources and leading to diminished goodwill by skeptics,” says Kerry Brown, transformation evangelist at Celonis, a process mining and process intelligence provider. The aim of automating processes is to improve how the business performs. That means drawing a direct line from the automation effort to a well-defined ROI. ... Data is arguably the most boring issue on IT’s plate. That’s because it requires a ton of effort to update, label, manage and store massive amounts of data and the job is never quite done. It may be boring work, but it is essential and can be fatal if left for later. “One of the most significant mistakes CIOs make when approaching automation is underestimating the importance of data quality. Automation tools are designed to process and analyze data at scale, but they rely entirely on the quality of the input data,” says Shuai Guan, co-founder and CEO at Thunderbit, an AI web scraper tool. ... "CIOs often fall into the trap of thinking automation is just about suppressing noise and reducing ticket volumes. While that’s one fairly common use case, automation can offer much more value when done strategically,” says Erik Gaston
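A basic data-quality gate before automating a process can be as simple as counting the rows that would silently break an automation downstream. The field names and sample rows below are illustrative:

```python
def quality_report(rows, required=("id", "amount")):
    """Count rows whose missing required fields would make automation misfire."""
    bad = 0
    for row in rows:
        if any(row.get(key) in (None, "") for key in required):
            bad += 1
    return {"total": len(rows), "bad": bad, "bad_pct": bad / len(rows)}

rows = [
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": None},    # missing value: automation would misfire
    {"id": 3, "amount": 7.5},
    {"id": None, "amount": 3.0},  # unidentifiable record
]
report = quality_report(rows)
```

Running a report like this before switching on an automation turns "data quality" from an abstract worry into a number the ROI case can be checked against.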


Outmaneuvering Tariffs: Navigating Disruption with Data-Driven Resilience

The fact that tariffs are coming was expected – President Donald Trump campaigned promising tariffs – but few could have expected their severity (145% on Chinese imports, as of this writing) and their pace of change (prohibitively high “reciprocal” tariffs on 100+ countries, only to be temporarily rescinded days later). Also unpredictable were second-order effects such as stock and bond market reactions, affecting the cost of capital, and the impact on consumer demand, due to the changing expectations of inflation or concerns of job loss. ... Most organizations will have fragmented views of data, including views of all of the components that come from a given supplier or are delivered through a specific transportation provider. They may have a product-centric view that includes all suppliers that contribute all of the components of a given product. But this data often resides in a variety of supplier-management apps, procurement apps, demand forecasting apps, and other types of apps. Some may be consolidated into a data lakehouse or a cloud data warehouse to enable advanced analytics, but the time required by a data engineering team to build the necessary data pipelines from these systems is often multiple days or weeks, and such pipelines will usually only be implemented for scenarios that the business expects will be stable over time.


The state of intrusions: Stolen credentials and perimeter exploits on the rise, as phishing wanes

What’s worrying is that in over half of intrusions (57%) the victim organizations learned about the compromise of their networks and systems from a third-party rather than discovering them through internal means. In 14% of cases, organizations were notified directly by attackers, usually in the form of ransom notes, but 43% of cases involved external entities such as a cybersecurity company or law enforcement agencies. The average time attackers spent inside a network until being discovered last year was 11 days, a one-day increase over 2023, though still a major improvement versus a decade ago when the average discovery time was 205 days. Attacker dwell time, as Mandiant calls it, has steadily decreased over the years, which is a good sign ... In terms of ransomware, the most common infection vector observed by Mandiant last year were brute-force attacks (26%), such as password spraying and use of common default credentials, followed by stolen credentials and exploits (21% each), prior compromises resulting in sold access (15%), and third-party compromises (10%). Cloud accounts and assets were compromised through phishing (39%), stolen credentials (35%), SIM swapping (6%), and voice phishing (6%). Over two-thirds of cloud compromises resulted in data theft and 38% were financially motivated with data extortion, business email compromise, ransomware, and cryptocurrency fraud being leading goals.


Three Ways AI Can Weaken Your Cybersecurity

“Slopsquatting” is a fresh AI take on “typosquatting,” where ne’er-do-wells spread malware to unsuspecting Web travelers who happen to mistype a URL. With slopsquatting, the bad guys are spreading malware through software development libraries that have been hallucinated by GenAI. ... While it is still unclear whether the bad guys have weaponized slopsquatting yet, GenAI’s tendency to hallucinate software libraries is perfectly clear. Last month, researchers published a paper that concluded that GenAI recommends Python and JavaScript libraries that don’t exist about one-fifth of the time. ... Like the SQL injection attacks that plagued early Web 2.0 warriors who didn’t adequately validate database input fields, prompt injections involve the surreptitious injection of a malicious prompt into a GenAI-enabled application to achieve some goal, ranging from information disclosure to code execution. Mitigating these sorts of attacks is difficult because of the nature of GenAI applications. Instead of inspecting code for malicious entities, organizations must investigate the entirety of a model, including all of its weights. ... A form of adversarial AI attack, data poisoning or data manipulation poses a serious risk to organizations that rely on AI. According to the security firm CrowdStrike, data poisoning is a risk to healthcare, finance, automotive, and HR use cases, and can even potentially be used to create backdoors.
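A first-line defense against slopsquatting is vetting LLM-suggested dependencies against a trusted index before installing anything. The sketch below uses a tiny in-memory allowlist; `fastjson-utils` is a made-up, hallucinated-looking name, and a real check would query the package registry or an internal mirror instead:

```python
# Known-real packages (in practice, query the registry or a curated index)
known_packages = {"requests", "numpy", "flask", "pandas"}

def vet_requirements(requested):
    """Split LLM-suggested dependencies into vetted and suspect lists.

    Hallucinated names absent from the index are the slopsquatting attack
    surface: an attacker can register them and ship malware.
    """
    vetted = [pkg for pkg in requested if pkg in known_packages]
    suspect = [pkg for pkg in requested if pkg not in known_packages]
    return vetted, suspect

vetted, suspect = vet_requirements(["requests", "fastjson-utils", "numpy"])
```

Anything in `suspect` gets a human look before `pip install` ever runs, which closes the window the hallucination opens.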


AI Has Moved From Experimentation to Execution in Enterprise IT

According to the SOAS report, 94% of organisations are deploying applications across multiple environments—including public clouds, private clouds, on-premises data centers, edge computing, and colocation facilities—to meet varied scalability, cost, and compliance requirements. Consequently, most decision-makers see hybrid environments as critical to their operational flexibility. 91% cited adaptability to fluctuating business needs as the top benefit of adopting multiple clouds, followed by improved app resiliency (68%) and cost efficiencies (59%). A hybrid approach is also reflected in deployment strategies for AI workloads, with 51% planning to use models across both cloud and on-premises environments for the foreseeable future. Significantly, 79% of organisations recently repatriated at least one application from the public cloud back to an on-premises or co-location environment, citing cost control, security concerns, and predictability. ... “While spreading applications across different environments and cloud providers can bring challenges, the benefits of being cloud-agnostic are too great to ignore. It has never been clearer that the hybrid approach to app deployment is here to stay,” said Cindy Borovick, Director of Market and Competitive Intelligence.


Trying to Scale With a Small Team? Here's How to Drive Growth Without Draining Your Resources

To be an effective entrepreneur or leader, communication is key, and being able to prioritize initiatives that directly align with the overall strategic vision ensures that your lean team is working on projects that have the greatest impact. Integrate key frameworks such as Responsible, Accountable, Consulted, and Informed (RACI) and Objectives and Key Results (OKRs) to maintain transparency and focus, and to measure progress. By focusing efforts on high-impact activities, your lean team can achieve significant results without the unnecessary strain usually attributable to early-stage organizations. ... Many think that agile methodologies are only for the fast-moving software development industry — but in reality, the frameworks are powerful tools for lean teams in any industry. Encouraging the right culture is key, where quick pivots, regular genuine feedback loops and leadership that promotes continuous improvement are part of the everyday workflows. This agile mindset, when adopted early, helps teams rapidly respond to market changes and client issues. ... Trusting others builds rapport. Assigning clear ownership of tasks while allowing those team members the autonomy to execute the strategies creatively and efficiently, while also allowing them to fail, is how trust is created.


Effecting Culture Changes in Product Teams

Depending on the organization, the responsibility of successfully leading a culture shift among the product team could fall to various individuals – the CPO, VP of product development, product manager, etc. But regardless of the specific title, to be an effective leader, you can’t assume you know all the answers. Start by having one-to-one conversations with numerous members on the product/engineering team. Ask for their input and understand, from their perspective, what is working, what’s not working, and what ideas they have for how to accelerate product release timelines. After conducting one-to-one discussions, sit down and correlate the information. Where are the common denominators? Did multiple team members make the same suggestions? Identify the roadblocks that are slowing down the product team or standing in the way of delivering incremental value on a more regular basis. In many cases, tech leaders will find that their team already knows how to fix the issue – they just need permission to do things a bit differently and adjust company policies/procedures to better support a more accelerated timeline. Talking one-on-one with team members also helps resolve any misunderstandings around why the pace of work must change as the company scales and accumulates more customers. Product engineers often have a clear vision of what the end product should entail, and they want to be able to deliver on that vision.


Microsoft Confirms Password Spraying Attack — What You Need To Know

The password spraying attack exploited a command line interface tool called AzureChecker to “download AES-encrypted data that when decrypted reveals the list of password spray targets,” the report said. To add salt to the now-open wound, it then accepted as input an accounts.txt file containing the username and password combinations used for the attack. “The threat actor then used the information from both files and posted the credentials to the target tenants for validation,” Microsoft explained. The successful attack enabled the Storm-1977 hackers to then leverage a guest account in order to create a compromised subscription resource group and, ultimately, more than 200 containers that were used for cryptomining. ... Passwords are no longer enough to keep us safe online. That’s the view of Chris Burton, head of professional services at Pentest People, who told me that “where possible, we should be using passkeys, they’re far more secure, even if adoption is still patchy.” Lorri Janssen-Anessi, director of external cyber assessments at BlueVoyant is no less adamant when it comes to going passwordless. ... And Brian Pontarelli, CEO of FusionAuth, said that the teams who are building the future of passwords are the same ones that are building and managing the login pages of their apps. “Some of them are getting rid of passwords entirely,” Pontarelli said.
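Password spraying has a recognizable shape in authentication logs: one password tried across many distinct accounts from the same source, the inverse of brute-forcing a single account. A hedged detection sketch over invented log tuples:

```python
from collections import defaultdict

def detect_spray(attempts, account_threshold=5):
    """Flag source IPs that tried the same password across many accounts."""
    by_source = defaultdict(lambda: defaultdict(set))
    for ip, user, password in attempts:
        by_source[ip][password].add(user)
    return {
        ip for ip, passwords in by_source.items()
        if any(len(users) >= account_threshold for users in passwords.values())
    }

# Invented log data: one sprayer, one ordinary user retrying their password
attempts = [("10.0.0.9", f"user{i}", "Spring2025!") for i in range(6)]
attempts += [("10.0.0.5", "alice", "pw1"), ("10.0.0.5", "alice", "pw2")]
flagged = detect_spray(attempts)
```

The threshold is a tuning knob: too low and normal shared-default-password noise triggers it, too high and a slow spray slips under it.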


The secret weapon for transformation? Treating it like a merger

Like an IMO, a transformation office serves as the conductor — setting the tempo, aligning initiatives and resolving portfolio-level tensions before they turn into performance issues. It defines the “music” everyone should be playing: a unified vision for experience, business architecture, technology design and most importantly, change management. It also builds connective tissue. It doesn’t just write the blueprint — it stays close to initiative or project leads to ensure adherence, adapts when necessary and surfaces interdependencies that might otherwise go unnoticed. ... What makes the transformation office truly effective isn’t just the caliber of its domain leaders — it’s the steering committee of cross-functional VPs from core business units and corporate functions that provides strategic direction and enterprise-wide accountability. This group sets the course, breaks ties and ensures that transformation efforts reflect shared priorities rather than siloed agendas. Together, they co-develop and maintain a multi-year roadmap that articulates what capabilities the enterprise needs, when and in what sequence. Crucially, they’re empowered to make decisions that span the legacy seams of the organization — the gray areas where most transformations falter. In this way, the transformation office becomes more than connective tissue; it becomes an engine for enterprise decision-making.


Legacy Modernization: Architecting Real-Time Systems Around a Mainframe

When traffic spikes hit our web portal, those requests would flow through to the mainframe. Unlike cloud systems, mainframes can't elastically scale to handle sudden load increases. This created a bottleneck that could overload the mainframe, causing connection timeouts. As timeouts increased, the mainframe would crash, leading to complete service outages with a large blast radius: hundreds of other applications that depend on the mainframe would also be impacted. This is a perfect example of the problems with synchronous connections to the mainframes. When the mainframes could be overwhelmed by a highly elastic resource like the web, the result could be failures in datastores, and sometimes cascading failures across all consuming applications. ... Change Data Capture became the foundation of our new architecture. Instead of batch ETLs running a few times daily, CDC streamed data changes from the mainframes in near real-time. This created what we called a "system-of-reference" - not the authoritative source of truth (the mainframe remains "system-of-record"), but a continuously updated reflection of it. The system of reference is not a proxy of the system of record, which is why our website was still live when the mainframe went down.
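The CDC pattern described can be sketched as a small consumer that folds change events into the system-of-reference store. The event shape here is illustrative (real CDC streams, e.g. Debezium's, carry richer envelopes), and the account records are invented:

```python
# "System-of-reference": a continuously updated reflection of the mainframe,
# which remains the system-of-record.
reference_store = {}

def apply_change(event: dict) -> None:
    """Fold one change event into the reference store."""
    key = event["key"]
    if event["op"] == "delete":
        reference_store.pop(key, None)
    else:                               # insert or update
        reference_store[key] = event["after"]

for event in [
    {"op": "insert", "key": "acct-1", "after": {"balance": 100}},
    {"op": "update", "key": "acct-1", "after": {"balance": 250}},
    {"op": "insert", "key": "acct-2", "after": {"balance": 75}},
    {"op": "delete", "key": "acct-2", "after": None},
]:
    apply_change(event)
```

Reads served from `reference_store` keep working even when the upstream system-of-record is unavailable, which is exactly why the website stayed live during the mainframe outage.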

Daily Tech Digest - April 29, 2025


Quote for the day:

"Don't let yesterday take up too much of today." -- Will Rogers



AI and Analytics in 2025 — 6 Trends Driving the Future

As AI becomes deeply embedded in enterprise operations and agentic capabilities are unlocked, concerns around data privacy, security and governance will take center stage. With emerging technologies evolving at speed, a mindset of continuous adaptation will be required to ensure requisite data privacy, combat cyber risks and successfully achieve digital resilience. As organizations expand their global footprint, understanding the implications of evolving AI regulations across regions will be crucial. While unifying data is essential for maximizing value, ensuring compliance with diverse regulatory frameworks is mandatory. A nuanced approach to regional regulations will be key for organizations navigating this dynamic landscape. ... As the technology landscape evolves, continuous learning becomes essential. Professionals must stay updated on the latest technologies while letting go of outdated practices. Tech talent responsible for building AI systems must be upskilled in evolving AI technologies. At the same time, employees across the organization need training to collaborate effectively with AI, ensuring seamless integration and success. Whether through internal upskilling or embarking on skills-focused partnerships, investment in talent management will prove crucial to winning the tech-talent gold rush and thriving in 2025 and beyond.


Generative AI is not replacing jobs or hurting wages at all, say economists

The researchers looked at the extent to which company investment in AI has contributed to worker adoption of AI tools, and also how chatbot adoption affected workplace processes. While firm-led investment in AI boosted the adoption of AI tools — saving time for 64 to 90 percent of users across the studied occupations — chatbots had a mixed impact on work quality and satisfaction. The economists found, for example, that "AI chatbots have created new job tasks for 8.4 percent of workers, including some who do not use the tools themselves." In other words, AI is creating new work that cancels out some potential time savings from using AI in the first place. "One very stark example that it's close to home for me is there are a lot of teachers who now say they spend time trying to detect whether their students are using ChatGPT to cheat on their homework," explained Humlum. He also observed that a lot of workers now say they're spending time reviewing the quality of AI output or writing prompts. Humlum argues that can be spun negatively, as a subtraction from potential productivity gains, or more positively, in the sense that automation tools historically have tended to generate more demand for workers in other tasks. "These new job tasks create new demand for workers, which may boost their wages, if these are more high value added tasks," he said.


Advancing Digital Systems for Inclusive Public Services

Uganda adopted the modular open-source identity platform, MOSIP, two years ago. A small team of 12, with limited technical expertise, began adapting the MOSIP platform to align with Uganda's Registration of Persons Act, gradually building internal capacity. By the time the system integrator was brought in, Uganda had incorporated the digital public good (DPG) into its legal framework, providing the integrator with a foundation to build upon. This early customization helped shape the legal and technical framework needed to scale the platform. But improvements are needed, particularly in the documentation of the DPG. "Standardization, information security and inclusion were central to our work with MOSIP," Kisembo said. "Consent became a critical focus and is now embedded across the platform, raising awareness about privacy and data protection." ... Nigeria, with a population of approximately 250 million, is taking steps to coordinate its previously fragmented digital systems through a national DPI framework. The country deployed multiple digital solutions over the last 10 to 15 years, which were often developed in silos by different ministries and private sector agencies. In 2023 and 2024, Nigeria developed a strategic framework to unify these systems and guide its DPI adoption.


Eyes, ears, and now arms: IoT is alive

In just a few years, devices at home and work started including cameras to see and microphones to hear. Now, with new lines of vacuums and emerging humanoid robots, devices have appendages to manipulate the world around them. They’re not only able to collect information about their environment but can touch, “feel”, and move it. ... But, knowing the history of smart devices getting hacked, there’s cause for concern. From compromised baby monitors to open video doorbell feeds, bad actors have exploited default passwords and unencrypted communications for years. And now, beyond seeing and hearing, we’re on the verge of letting devices roam around our homes and offices with literal arms. What’s stopping a hacked robot vacuum from tampering with security systems? Or your humanoid helper from opening the front door? ... If developers want robots to become a reality, they need to create confidence in these systems immediately. This means following best practice cybersecurity by enabling peer-to-peer connectivity, outlawing generic credentials, and supporting software throughout the device lifecycle. Likewise, users can more safely participate in the robot revolution by segmenting their home networks, implementing multi-factor authentication, and regularly reviewing device permissions.


How to Launch a Freelance Software Development Career

Finding freelance work can be challenging in many fields, but it tends to be especially difficult for software developers. One reason is that many software development projects do not lend themselves well to a freelancing model because they require a lot of ongoing communication and maintenance. This means that, to freelance successfully as a developer, you'll need to seek out gigs that are sufficiently well-defined and finite in scope that you can complete them within a predictable period of time. ... Specifically, you need to envision yourself also as a project manager, a finance director, and an accountant. When you can do these things, it becomes easier not just to freelance profitably, but also to convince prospective clients that you know what you're doing and that they can trust you to complete projects with quality and on time. ... While creating a portfolio may seem obvious enough, one pitfall that new freelancers sometimes run into is being unable to share work due to nondisclosure agreements they sign with clients. When negotiating contracts, avoid this risk by ensuring that you'll retain the right to share any key aspects of a project for the purpose of promoting your own services. Even if clients won't agree to letting you share source code, they'll often at least allow you to show off the end product and discuss at a high level how you approached and completed a project.


Digital twins critical for digital transformation to fly in aerospace

Among the key conclusions was that there is a critical need to examine the standards that currently support the development of digital twins, identify gaps in the governance landscape, and establish expectations for the future. ... The net result will be that stakeholder needs and objectives become more achievable, resulting in affordable solutions that shorten test, demonstration, certification and verification, thereby decreasing lifecycle cost while increasing product performance and availability. Yet the DTC cautioned that cyber security considerations within a digital twin and across its external interfaces must be customisable to suit the environment and risk tolerance of digital twin owners. ... First, the DTC said that evidence suggests a necessity to examine the standards that currently support digital twins, identify gaps in the governance landscape, and set expectations for future standard development. In addition, the research team identified that standardisation challenges exist when developing, integrating and maintaining digital twins during design, production and sustainment. There was also a critical need to identify and manage requirements that support interoperability between digital twins throughout the lifecycle. This recommendation also applied to the more complex SoS Digital Twins development initiatives. Digital twin model calibration needs to be an automated process and should be applicable to dynamically varying model parameters.
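The automated-calibration point can be illustrated with a toy sketch. Assuming a twin whose behaviour is linear in its parameters (the drift model, the telemetry values, and the `recalibrate` helper below are all hypothetical), a closed-form least-squares fit can be rerun whenever fresh sensor data arrives, keeping the twin's parameters in step with the physical asset:

```python
import random

def recalibrate(ts, ys):
    """Closed-form least-squares fit of a linear drift model y = a*t + b,
    rerun automatically whenever fresh telemetry arrives."""
    n = len(ts)
    st, sy = sum(ts), sum(ys)
    stt = sum(t * t for t in ts)
    sty = sum(t * y for t, y in zip(ts, ys))
    a = (n * sty - st * sy) / (n * stt - st * st)
    b = (sy - a * st) / n
    return a, b

# Simulated telemetry from the physical asset: true drift 0.5, offset 2.0
rng = random.Random(0)
ts = [i * 0.2 for i in range(50)]
ys = [0.5 * t + 2.0 + rng.gauss(0, 0.05) for t in ts]

a, b = recalibrate(ts, ys)
print(f"calibrated drift={a:.3f}, offset={b:.3f}")
```

A real pipeline would extend this idea to nonlinear models and dynamically varying parameters, as the DTC recommendation envisages, but the loop is the same: ingest telemetry, refit, update the twin.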


Quality begins with planning: Building software with the right mindset

Too often, quality is seen as the responsibility of QA engineers. Developers write the code, QA tests it, and ops teams deploy it. But in high-performing teams, that model no longer works. Quality isn’t one team’s job; it’s everyone’s job. Architects defining system components, developers writing code, product managers defining features, and release managers planning deployments all contribute to delivering a reliable product. When quality is owned by the entire team, testing becomes a collaborative effort. Developers write testable code and contribute to test plans. Product managers clarify edge cases during requirements gathering. Ops engineers prepare for rollback scenarios. This collective approach ensures that no aspect of quality is left to chance. ... One of the biggest causes of software failure isn’t building the wrong way, it’s building the wrong thing. You can write perfectly clean, well-tested code that works exactly as intended and still fail your users if the feature doesn’t solve the right problem. That’s why testing must start with validating the requirements themselves. Do they align with business goals? Are they technically feasible? Have we considered the downstream impact on other systems or components? Have we defined what success looks like?


What Makes You a Unicorn in Your Industry? Start by Mastering These 4 Pillars

First, you have to have the capacity, the skill, to excel in that area. Additionally, you have to learn how to leverage that standout aspect to make it work for you in the marketplace - incorporating it into your branding, spotlighting it in your messaging, maybe even including it in your name. Concise as the notion is, there's actually a lot of breadth and flexibility in it, for when it comes to selecting what you want to do better than anyone else is doing it, your choices are boundless. ... Consumers have gotten quite savvy at sniffing out false sincerity, so when they come across the real thing, they're much more prone to give you their business. Basically, when your client base believes you prioritize your vision, your team and creating an incredible product or service over financial gain, they want to work with you. ... Building and maintaining a remarkable "company culture" can just be a buzzword to you, or you can bring it to life. I can't think of any single factor that makes my company more valuable to my clients than the value I place on my people and the experience I endeavor to provide them by working for me. When my staff feels openly recognized, wholly supported and vitally important to achieving our shared outcomes, we're truly unstoppable. So keep in mind that your unicorn focus can be internal, not necessarily client-facing.



Conquering the costs and complexity of cloud, Kubernetes, and AI

While IT leaders clearly see the value in platform teams—nine in 10 organizations have a defined platform engineering team—there’s a clear disconnect between recognizing their importance and enabling their success. This gap signals major stumbling blocks ahead that risk derailing platform team initiatives if not addressed early and strategically. For example, platform teams find themselves burdened by constant manual monitoring, limited visibility into expenses, and a lack of standardization across environments. These challenges are only amplified by the introduction of new and complex AI projects. ... Platform teams that manually juggle cost monitoring across cloud, Kubernetes, and AI initiatives find themselves stretched thin and trapped in a tactical loop of managing complex multi-cluster Kubernetes environments. This prevents them from driving strategic initiatives that could actually transform their organizations’ capabilities. These challenges reflect the overall complexity of modern cloud, Kubernetes, and AI environments. While platform teams are chartered with providing infrastructure and tools necessary to empower efficient development, many resort to short-term patchwork solutions without a cohesive strategy. 


Reporting lines: Could separating from IT help CISOs?

CFOs may be primarily concerned with the financial performance of the business, but they also play a key role in managing organizational risk. This is where CISOs can learn the tradecraft in translating technical measures into business risk management. ... “A CFO comes through the finance ranks without a lot of exposure to IT and I can see how they’re incentivized to hit targets and forecasts, rather than thinking: if I spend another two million on cyber risk mitigation, I may save 20 million in three years’ time because an incident was prevented,” says Schat. Budgeting and forecasting cycles can be a mystery to CISOs, who may engage with the CFO infrequently, and interactions are mostly transactional around budget sign-off on cybersecurity initiatives, according to Gartner. ... It’s not uncommon for CISOs to find security seen as a barrier, where the benefits aren’t always obvious, and are actually at odds with the metrics that drive the CIO. “Security might slow down a project, introduce a layer of complexity that we need from a security perspective, but it doesn’t obviously help the customer,” says Bennett. Reporting to CFOs can relieve potential conflicts of interest. It can allow CISOs to broaden their involvement across all areas of the organization, beyond input in technology, because security and managing risk is a whole-of-business mission.

Daily Tech Digest - March 12, 2025


Quote for the day:

"People may forget what you say, but they won't forget how you made them feel." -- Mary Kay Ash



Rethinking Firewall and Proxy Management for Enterprise Agility

Firewall and proxy management follows a simple rule: block all ports by default and allow only essential traffic. Recognizing that developers understand their applications best, why not empower them to manage firewall and proxy changes as part of a “shift security left” strategy? In practice, however, every security specialist knows what happens when deadlines are tight: instead of figuring out the exact IP range an application needs, developers open connectivity to the entire internet with the intention of refining it later. Temporary fixes, if left unchecked, can evolve into serious vulnerabilities. ... Periodically auditing firewall and proxy rule sets is essential to maintaining security, but it is not a substitute for a robust approval process. Firewalls and proxies are exposed to external threats, and attackers might exploit misconfigurations before periodic audits catch them. Blocking insecure connections on a firewall when the application is already live requires re-architecting the solution, which is costly and time-consuming. Thus, preventing risky changes must be the priority.
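The default-deny principle can be sketched in a few lines. The rule table, networks, and ports below are hypothetical, and a real deployment would express this in the firewall's own policy language rather than in application code:

```python
import ipaddress

# Default-deny policy: traffic passes only if an explicit allow rule matches.
ALLOW_RULES = [
    # (destination network, port) pairs a developer requested -- hypothetical values
    (ipaddress.ip_network("10.20.0.0/16"), 443),   # internal API tier
    (ipaddress.ip_network("192.0.2.8/32"), 5432),  # one database host
]

def is_allowed(dst_ip, dst_port):
    """Return True only when some allow rule matches; deny everything else."""
    ip = ipaddress.ip_address(dst_ip)
    return any(ip in net and dst_port == port for net, port in ALLOW_RULES)

print(is_allowed("10.20.5.7", 443))    # matches the internal API rule
print(is_allowed("203.0.113.9", 443))  # no rule matches, so denied by default
```

The shortcut the article warns about amounts to adding an allow rule for `0.0.0.0/0`: one line that silently turns the default-deny posture into allow-all.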


Multicloud: Tips for getting it right

It’s obvious that a multicloud strategy — regardless of what it actually looks like — will further increase complexity. This is simply because each cloud platform works with its own management tools, security protocols and performance metrics. Anyone who wants to integrate multicloud into their IT landscape needs a robust management system that can handle the specific requirements of the different environments while ensuring an overview and control across all platforms. This is necessary not only for reasons of handling and performance but also to be as free as possible when choosing the optimal provider for the respective application scenario. This requires cross-platform technologies and tools. The large hyperscalers do provide interfaces for data exchange with other platforms as standard. ... In general, anyone pursuing a multicloud strategy should take steps in advance to ensure that complexity does not lead to chaos but to more efficient IT processes. Security is one of the main issues. And it is twofold: on the one hand, the networked services must be protected in themselves and within their respective platforms. On the other hand, the entire construct with its various architectures and systems must be secure. It is well known that the interfaces are potential gateways for unwelcome “guests”.


FinOps and AI: A Winning Strategy for Cost-Efficient Growth

FinOps is a management approach focused on shared responsibility for cloud computing infrastructure and related costs. ... Companies are attempting to drink from the AI firehose, and unfortunately, they’re creating AI strategies in real-time as they rush to drive revenue and staff productivity. Ideally, you want a foundation in place before using AI in operations. This should include an emphasis on cost management, resource allocation, and keeping tabs on ROI. This is also the focus of FinOps, which can prevent errors and improve processes to further AI adoption. ... To begin, companies should create a budget and forecast the AI projects they want to take on. This planning is a pillar of FinOps and should accurately assess the total cost of initiatives, emphasizing resource allocation (including staffing) and eliminating billing overruns. Cost optimization can also help identify opportunities and reduce expenses. The new focus on AI services in the cloud raises the stakes for scalability and cost efficiency, as these workloads are much more sensitive to overruns and inefficient usage. Even if organizations are not implementing AI into end-user workloads, there is still an opportunity to craft internal systems utilizing AI to help identify operational efficiencies and implement cost controls on existing infrastructure.
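The budgeting-and-forecasting pillar can be made concrete with a naive run-rate forecast. This is a minimal sketch assuming a flat 30-day month and hypothetical spend figures, not a real FinOps tool:

```python
from datetime import date

def forecast_month_end(spend_to_date, today, budget):
    """Naive run-rate forecast: assume the daily burn so far continues
    for the rest of the month, then flag a projected overrun."""
    days_elapsed = today.day
    days_in_month = 30  # simplification for the sketch
    projected = spend_to_date / days_elapsed * days_in_month
    return projected, projected > budget

projected, over = forecast_month_end(
    spend_to_date=6200.0, today=date(2025, 3, 12), budget=12000.0)
print(f"projected=${projected:.0f}, over budget: {over}")  # projected=$15500, over budget: True
```

Real FinOps platforms use far richer models (seasonality, reserved-capacity discounts, per-team allocation), but even this crude check surfaces an overrun weeks before the invoice does.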


3 Signs Your Startup Needs a CTO — But Not As a Full-Time Hire

CTO as a service provides businesses with access to experienced technical leadership without the commitment of a full-time hire. This model allows startups to leverage specialized expertise on an as-needed basis. ... An on-demand expert can bridge this gap by offering leadership that goes beyond programming. This model provides access to strategic guidance on technology choices, project architecture and team dynamics. During a growth phase, mistakes in management won't be forgiven. ... Hiring a full-time CTO can strain tight budgets, diverting funds from critical areas like product development and market expansion. However, with the CTO as a service model, companies can access top-tier expertise tailored to their financial capabilities. This flexibility allows startups to engage a tech strategist on a project basis, paying only for the high-quality leadership they need when they need it (and if needed). ... Engaging outsourced expertise offers a viable solution, providing a fresh perspective on existing challenges at a cost that remains accessible, even amid resource constraints. This strategic move allows businesses to tap into a wealth of external knowledge, leveraging insights gained from diverse industry experiences. Such an external viewpoint can be invaluable, especially when navigating complex technical hurdles, ensuring that projects not only survive but thrive. 


How to Turn Developer Team Friction Into a Positive Force

Developer team friction, while often seen as a negative trait, can actually become a positive force under certain conditions, McGinnis says. "Friction can enhance problem-solving abilities by highlighting weaknesses in current processes or solutions," he explains. "It prompts the team to address these issues, thereby improving their overall problem-solving skills." Team friction often occurs when a developer passionately advocates a new approach or solution. ... Friction can easily spiral out of control when retrospectives and feedback focus on individuals instead of addressing issues and problems jointly as a team. "Staying solution-oriented and helping each other achieve collective success for the sake of the team, should always be the No. 1 priority," Miears says. "Make it a safe space." As a leader it's important to empower every team member to speak up, Beck advises. Each team member has a different and unique perspective. "For instance, you could have one brilliant engineer who rarely speaks up, but when they do it’s important that people listen," he says. "At other times, you may have an outspoken member on your team who will speak on every issue and argue for their point, regardless of the situation." 


Enterprise Architecture in the Digital Age: Navigating Challenges and Unleashing Transformative Potential

EA is about crafting a comprehensive, composable, and agile architecture-aligned blueprint that synchronizes an organization’s business processes, workforce, and technology with its strategic vision. Rooted in frameworks like TOGAF, it transcends IT, embedding itself into the very heart of a business. ... In this digital age, EA’s role is more critical than ever. It’s not just about maintaining systems; it’s about equipping organizations—whether agile startups or sprawling, successful enterprises—for the disruptions driven by rapid technological evolution and innovation. ... As we navigate inevitable future complexities, Enterprise Architecture stands as a critical differentiator between organizations that merely survive digital disruption and those that harness it for competitive advantage. The most successful implementations of EA share common characteristics: they integrate technical depth with business acumen, maintain adaptable governance frameworks, and continuously measure impact through concrete metrics. These aren’t abstract benefits—they represent tangible business outcomes that directly impact market position and financial performance. Looking forward, EA will increasingly focus on orchestrating complex ecosystems rather than simply mapping them. 


Generative AI Drives Emphasis on Unstructured Data Security

As organizations pivot their focus, the demand for vendors specializing in security solutions, such as data classification, encryption and access control, tailored to unstructured data is expected to increase. This increased demand reflects the necessity for robust and adaptable security measures that can effectively protect the vast and varied types of unstructured data organizations now manage. In tandem with this shift, the rising significance of unstructured data in driving business value and innovation compels organizations to develop expertise in unstructured data security. ... Organizations should prioritize investment in security controls specifically designed for unstructured data. This includes tools with advanced capabilities such as rapid data classification, entitlement management and unclassified data redaction. Solutions that offer prompt engineering and output filtering can also further enhance data security measures. ... Building a knowledgeable team is crucial for managing unstructured data security. Organizations should invest in staffing, training and development to cultivate expertise in this area. This involves hiring data security professionals with specialized skills and providing ongoing education to ensure they are equipped to handle the unique challenges associated with unstructured data. 
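Rapid classification and redaction of unstructured text, mentioned above, can be sketched roughly as below. The two regex patterns are stand-ins for illustration; production systems rely on vetted classifiers and entitlement checks, not a pair of expressions:

```python
import re

# Hypothetical patterns -- real deployments use vetted classifiers, not two regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    """Replace each detected entity with a typed placeholder before the
    text is fed into a generative AI prompt or training pipeline."""
    for label, pat in PATTERNS.items():
        text = pat.sub(f"[{label}]", text)
    return text

doc = "Contact jane.doe@example.com, SSN 123-45-6789, about the Q3 audit."
print(redact(doc))  # Contact [EMAIL], SSN [SSN], about the Q3 audit.
```

The same hook is where output filtering sits on the response side: scan what the model emits before it leaves the trust boundary.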


Quantum Pulses Could Help Preserve Qubit Stability, Researchers Report

The researchers used a model of two independent qubits, each interacting with its own environment through a process called pure dephasing. This form of decoherence arises from random fluctuations in the qubit’s surroundings, which gradually disrupt its quantum state. The study analyzed how different configurations of PDD pulses — applying them to one qubit versus both — affected the system’s evolution. By employing mathematical models that calculate the quantum speed limit based on changes in quantum coherence, the team measured the impact of periodic pulses on the system’s stability. When pulses were applied to both qubits, they observed a near-complete suppression of dephasing, while applying pulses to just one qubit provided partial protection. Importantly, the researchers investigated the effects of different pulse frequencies and durations to determine the optimal conditions for coherence preservation. ... While the study presents promising results, the effectiveness of PDD depends on the ability to deliver precise, high-frequency pulses. Practical quantum computing systems must contend with hardware limitations, such as pulse imperfections and operational noise, which could reduce the technique’s efficiency.
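The reported effect, pulses refocusing slow dephasing, can be illustrated with a toy simulation. This is not the paper's model: it assumes purely quasi-static noise (one random detuning per run) and ideal, instantaneous π-pulses that simply flip the sign of the accumulated phase:

```python
import cmath
import random

def coherence(n_pulses, total_t=1.0, sigma=5.0, trials=4000, seed=1):
    """Ensemble-averaged coherence |<exp(i*phi)>| under quasi-static
    dephasing noise, with n_pulses evenly spaced pi-pulses that flip
    the sign of the accumulated phase (a toy PDD model)."""
    rng = random.Random(seed)
    intervals = n_pulses + 1
    dt = total_t / intervals
    acc = 0j
    for _ in range(trials):
        delta = rng.gauss(0.0, sigma)   # one slow-noise realization
        sign, phi = 1, 0.0
        for _ in range(intervals):
            phi += sign * delta * dt
            sign = -sign                # pi-pulse toggles the phase sign
        acc += cmath.exp(1j * phi)
    return abs(acc) / trials

print(f"free evolution: {coherence(0):.2f}")  # coherence nearly destroyed
print(f"with 8 pulses:  {coherence(8):.2f}")  # dephasing largely refocused
```

In this crude picture the pulses cancel most of the slowly varying phase error, echoing the study's finding that pulsing both qubits nearly suppresses dephasing; real hardware must also contend with pulse imperfections and faster noise components.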


Disaster Recovery Plan for DevOps

While developing your disaster recovery plan for your DevOps stack, it’s worth considering the challenges DevOps teams face in this regard. DevOps ecosystems always have complex architecture, like interconnected pipelines and environments (e.g., GitHub and Jira integration). Thus, a single failure, whether due to a corrupted artifact or a ransomware attack, can cascade through the entire system. Moreover, the rapid pace of DevOps creates constant change, which can complicate data consistency and integrity checks during the recovery process. Another issue is data retention policies: SaaS tools often impose limited retention periods, usually varying from 30 to 365 days. ... Your backup solution should allow you to:

- Automate your backups, scheduling them at the most appropriate interval between copies, so that no data is lost in the event of failure;
- Provide long-term or even unlimited retention, which will help you restore data from any point in time;
- Apply the 3-2-1 backup rule and ensure replication between all the storages, so that if one of the backup locations fails, you can restore from another;
- Protect against ransomware, with AES encryption using your own encryption key, immutable backups, and restore and DR capabilities.
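The 3-2-1 backup rule mentioned above can be checked mechanically. The inventory format and helper below are hypothetical, a sketch of the kind of verification a backup tool might automate:

```python
def satisfies_321(copies):
    """3-2-1 rule: at least 3 copies of the data, on at least 2 different
    media types, with at least 1 copy stored offsite."""
    return (
        len(copies) >= 3
        and len({c["media"] for c in copies}) >= 2
        and any(c["offsite"] for c in copies)
    )

# Hypothetical backup inventory for a DevOps stack
backups = [
    {"location": "primary-dc", "media": "disk",  "offsite": False},
    {"location": "s3-replica", "media": "cloud", "offsite": True},
    {"location": "tape-vault", "media": "tape",  "offsite": True},
]
print(satisfies_321(backups))  # True: 3 copies, 3 media types, 2 offsite
```

Running such a check on every backup cycle, rather than during an annual audit, is what turns the rule from a slogan into a guarantee.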


The state of ransomware: Fragmented but still potent despite takedowns

“Law enforcement takedowns have disrupted major groups like LockBit but newly formed groups quickly emerge akin to a good old-fashioned game of whack-a-mole,” said Jake Moore, global cybersecurity advisor at ESET. “Double and triple extortion, including data leaks and DDoS threats, are now extremely common, and ransomware-as-a-service models make attacks even easier to launch, even by inexperienced criminals.” Moore added: “Law enforcement agencies have struggled over the years to take control of this growing situation as it is costly and resource heavy to even attempt to take down a major criminal network.” ... Meanwhile, enterprises are taking proactive measures to defend against ransomware attacks. These include implementing zero trust architectures, enhancing endpoint detection and response (EDR) solutions, and conducting regular exercises to improve incident response readiness. Anna Chung, principal researcher at Palo Alto Networks’ Unit 42, told CSO that advanced tools such as next-gen firewalls, immutable backups, and cloud redundancies, while keeping systems regularly patched, can help defend against cyberattacks. Greater use of gen AI technologies by attackers is likely to bring further challenges, Chung warned. 

Daily Tech Digest - February 13, 2025


Quote for the day:

"Coaching is unlocking a person's potential to maximize their own performance. It is helping them to learn rather than teaching them." -- John Whitmore


The cloud giants stumble

The challenge for Amazon, Microsoft, and Google will be to adapt their strategies to this evolving landscape. They’ll need to address concerns about costs, provide more flexible deployment options, and develop compelling AI solutions that deliver clear value to enterprises. Without these changes, they may continue to see their growth rates decline as organizations increasingly turn to alternative solutions that better meet their specific needs. This does not mean failure for Big Cloud, but they will take a few years to figure out what’s important to their market. They are a bit off-target now. The rise of specialized providers and the growing acceptance of private cloud solutions means enterprises can be more selective, choosing fit-for-purpose options rather than forcing all workloads into a one-size-fits-all public cloud model that may not be cost-effective. This is particularly relevant for AI initiatives, where specialized infrastructure providers often deliver better value. This freedom of choice comes with increased responsibility. Enterprises must develop more substantial in-house expertise to effectively evaluate and manage multiple infrastructure options. ... The key takeaway is clear: Enterprises are entering an era where they can build infrastructure strategies based on their specific needs rather than vendor limitations. 


Lines Between Nation-State and Cybercrime Groups Disappearing

“The vast cybercriminal ecosystem has acted as an accelerant for state-sponsored hacking, providing malware, vulnerabilities, and in some cases full-spectrum operations to states,” said Ben Read, senior manager at Google Threat Intelligence Group, which includes the Mandiant Intelligence and Threat Analysis Group teams. “These capabilities can be cheaper and more deniable than those developed directly by a state.” ... While nation-states for years have leveraged cybercriminals and their tools, the trend has accelerated since Russia launched its ongoing invasion of neighboring Ukraine in 2022, illustrating that at times of heightened need, financially motivated groups can be used to help the cause of countries. Nation-states can buy cyber capabilities from cybercrime groups or via underground marketplaces. Cybercriminals tend to specialize in certain areas and partner with others with different skills, and the specialization opens opportunities for state-backed actors to be customers that are buying malware and other tools from criminals. “Purchasing malware, credentials, or other key resources from illicit forums can be cheaper for state-backed groups than developing them in-house, while also providing some ability to blend in to financially motivated operations and attract less notice,” the researchers wrote.


Agentic AI vs. generative AI

Generative AI is artificial intelligence that can create original content—such as text, images, video, audio or software code—in response to a user’s prompt or request. Gen AI relies on deep learning models—algorithms that simulate the learning and decision-making processes of the human brain—alongside other technologies like robotic process automation (RPA). These models work by identifying and encoding the patterns and relationships in huge amounts of data, and then using that information to understand users' natural language requests or questions. They can then generate high-quality text, images, and other content in real time, based on the data they were trained on. Agentic AI describes AI systems that are designed to autonomously make decisions and act, with the ability to pursue complex goals with limited supervision. It brings together the flexible characteristics of large language models (LLMs) with the accuracy of traditional programming. This type of AI acts autonomously to achieve a goal by using technologies like natural language processing (NLP), machine learning, reinforcement learning and knowledge representation. It’s a proactive AI-powered approach, whereas gen AI is reactive to the user’s input. Agentic AI can adapt to different or changing situations and has “agency” to make decisions based on context. 
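The contrast can be sketched as a toy control loop: an agentic system observes state, picks an applicable tool, acts, and repeats until a goal predicate is met, whereas a generative model returns a single response and stops. Every name below (the ticket-closing tool, the state dict) is hypothetical:

```python
class Tool:
    def __init__(self, name, applicable, run):
        self.name, self.applicable, self.run = name, applicable, run

# Hypothetical micro-task: bring a queue of pending tickets to zero.
tools = [
    Tool("close_ticket",
         applicable=lambda s: s["pending"] > 0,
         run=lambda s: {**s, "pending": s["pending"] - 1,
                        "closed": s["closed"] + 1}),
]

def run_agent(goal_met, tools, state, max_steps=20):
    """Toy agentic loop: observe state, choose an applicable tool,
    act, observe again, and repeat until the goal is reached."""
    steps = 0
    while not goal_met(state) and steps < max_steps:
        tool = next((t for t in tools if t.applicable(state)), None)
        if tool is None:
            break  # no applicable action: stop rather than act blindly
        state = tool.run(state)
        steps += 1
    return state, steps

final, steps = run_agent(lambda s: s["pending"] == 0, tools,
                         {"pending": 3, "closed": 0})
print(final, steps)  # {'pending': 0, 'closed': 3} 3
```

In a real agent the "choose a tool" step is where the LLM reasons over context; the loop structure, goal test, and bounded step count are what give the system its agency and its guardrails.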


5 AI Mistakes That Could Kill Your Business In 2025

It’s easy for us to get so excited by the hype around AI that we rush out and start spending money on tools, platforms and projects without aligning them with strategic goals and priorities. This inevitably leads to fragmented initiatives that fail to deliver meaningful results or ROI. To avoid this, always “start with strategy” – implementing a strategic plan that clearly shows how any project or initiative will progress your organization towards improving the metrics and hitting the targets that will define your success. ... Assessing the skills and possibilities of training or reskilling, ensuring there is buy-in across the board, and addressing concerns people might have about job security are all critical. ... On the other hand, being slow to pull the plug on projects that aren’t working out can also be a recipe for disaster – potentially turning what should simply be a short, sharp lesson into a long-term waste of time and resources. There’s a reason that “fail fast” has become a mantra in tech circles. Projects should be designed so that their effectiveness can be quickly assessed, and if they aren’t working out, chalk it up to experience and move on to the next one. ... Make no mistake, going full-throttle on AI is expensive – hardware, software, specialist consulting expertise, compute resources, reskilling and upskilling a workforce and scaling projects from pilot to production – none of this comes cheap.


IoT Security: The Smart House Nightmares

One of the biggest challenges in securing IoT devices is the need for more standardization across the industry. With so many different manufacturers producing a wide variety of devices, there’s no universal security standard that all devices must adhere to. This leads to inconsistent security practices and varying levels of protection. Some devices have robust security features, while others may be woefully inadequate. ... Many IoT devices come with default usernames and passwords that are easy to guess. In some cases, these credentials are hardcoded into the device, meaning they can’t be changed even if the user wants to. Unfortunately, many users either don’t realize they should change these defaults or don’t bother. This creates a significant security risk, as these default credentials are often well-known to hackers. A quick search online can reveal the default passwords for thousands of devices, providing cybercriminals with an easy way to gain access to your smart home. ... Another common issue with IoT devices is the lack of regular software updates. Many devices are shipped with outdated firmware that contains known vulnerabilities. These vulnerabilities remain unpatched without regular updates, leaving the devices open to exploitation.
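Auditing a device inventory for well-known default credentials is straightforward to sketch. The device list and default-credential set below are made up for illustration:

```python
# Hypothetical set of well-known factory defaults (the kind a quick
# online search reveals for thousands of devices).
KNOWN_DEFAULTS = {("admin", "admin"), ("admin", "password"), ("root", "12345")}

# Hypothetical smart-home inventory
devices = [
    {"name": "doorbell-cam", "user": "admin", "password": "admin"},
    {"name": "robot-vacuum", "user": "home",  "password": "k3!vN9#qL"},
]

def flag_default_credentials(devices):
    """Return the names of devices still using factory-default logins."""
    return [d["name"] for d in devices
            if (d["user"], d["password"]) in KNOWN_DEFAULTS]

print(flag_default_credentials(devices))  # ['doorbell-cam']
```

For devices with hardcoded credentials that cannot be changed, the only mitigations left are network ones: segment them onto their own VLAN and block their inbound exposure.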


Addressing cost and complexity in cybersecurity compliance and governance

Employees across the ranks need to be trained in cybersecurity practices and made aware of their responsibilities towards security, compliance and governance. There has to be an effective mechanism for ensuring compliance and fixing accountability, and at the same time, a communication, feedback and recognition process for encouraging employee involvement. ... Efficiency apart, technologies such as artificial intelligence (AI), machine learning (ML), cloud, and blockchain are making cybersecurity operations smarter. AI and ML can identify anomalous patterns indicative of potential threats in real-time, and recommend mitigative actions. Cloud provides the required storage and computing infrastructure to house GRC data and applications, and the scalability to expand cybersecurity operations across business entities and geographies. Blockchain provides a secure, transparent and immutable record of GRC data and transactions that can be easily audited. ... The need for cybersecurity compliance and governance is universal, but enterprises need to craft the strategy that’s right for them based on objectives, size, resources, nature of business, compliance obligations across the jurisdictions they operate in, technology landscape, etc.


Cyber Fusion: a next generation approach to NIS2 compliance

This is not a one-off box-ticking exercise. Organisations will need to persistently test their cybersecurity and response capabilities, conduct regular cyber risk assessments and ensure that clear lines of management and reporting responsibility are defined and in place. Ultimately, organisations need to ensure they can detect and respond to cybersecurity events faster and more effectively. The sooner a possible threat is detected, the better an organisation can comply with the regulatory reporting requirements should it evolve into a full-blown incident. Importantly, NIS2 highlights incident reporting and information sharing across industries and along supply chains as essential preparation against security threats. As a key requirement of the directive, the voluntary exchange of cybersecurity information is now enshrined as good security practice. ... NIS2 is the EU's toughest cybersecurity directive to date, and compliance depends on a multi-step process: understanding the scope; connecting with relevant authorities; undertaking a gap analysis; creating new and updated policies; training the right employees; and monitoring progress. All of this will enable businesses to track their supply chain for threats and vulnerabilities and stay on top of their risk management strategies.


The DPDP Act, 2023 and the Draft DPDP Rules, 2025: What Do They Mean for India’s AI Start-Ups?

Reasonable security measures under the Draft DPDP Rules include encryption, obfuscation, masking and the use of virtual tokens mapped to specific personal data. Further, regular security audits, vulnerability assessments and penetration testing to identify and address potential risks form part of the organizational measures that may be undertaken. It is crucial that AI start-ups take sufficient security measures to secure their AI models. ... The Act requires organizations to retain personal data only for as long as necessary to fulfil the purposes for which it was collected. They must establish and implement clear data retention policies that align with these guidelines. The Draft DPDP Rules prescribe specific retention periods based on the purpose for which the data is collected and processed. Once the data is no longer needed, organizations should ensure its secure deletion or anonymization to prevent unauthorized access or misuse. Data Principals must be informed 48 hours before their data is to be erased. This process can include automated systems for tracking data lifecycles, regular audits to identify redundant data, and secure erasure in compliance with industry best practices.
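The "virtual tokens mapped to specific personal data" measure can be sketched as a token vault. The vault design and token format below are illustrative assumptions on my part, not anything prescribed by the Rules; the point is that systems handle only opaque tokens, while the mapping to personal data lives in one access-controlled store whose entries can be erased at the end of the retention period:

```python
# A minimal tokenization sketch: personal data is swapped for opaque
# tokens, and "secure deletion" amounts to destroying the mapping.
import secrets

class TokenVault:
    """Maps opaque tokens to personal data; the mapping lives only
    inside the vault, which can be access-controlled separately."""

    def __init__(self):
        self._store = {}  # token -> original value

    def tokenize(self, value: str) -> str:
        """Replace a personal-data value with a random token."""
        token = "tok_" + secrets.token_hex(8)
        self._store[token] = value
        return token

    def detokenize(self, token: str) -> str:
        """Resolve a token back to the original value (KeyError if erased)."""
        return self._store[token]

    def erase(self, token: str) -> None:
        """End-of-retention deletion: the token can no longer be
        resolved back to personal data."""
        self._store.pop(token, None)

vault = TokenVault()
token = vault.tokenize("priya@example.com")
assert vault.detokenize(token) == "priya@example.com"
vault.erase(token)  # the data is no longer recoverable via the token
```

A real implementation would persist the vault with encryption at rest and audit every detokenization; this sketch only shows the data-flow shape.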


"Blatantly unlawful and horrifically intrusive" data collection is everywhere – how to fight back

Fielding called for "some actual regulation from the actual regulator," saying that "as long as it's more profitable and easier to break the law than not, then businesses will." "We cannot expect commercial incentives to save the day for us because they are in direct opposition to the purpose of these laws, which is human rights, human dignity," she added. The Information Commissioner's Office (ICO) has stressed that non-essential cookies shouldn't be deployed on users' devices if they haven't actively given consent. It has also said organisations must make it as easy for users to "reject all" as it is to "accept all." ... "Shame" was something championed by Fielding. She commented on how using "community" and our networks "to make it socially unacceptable to treat people like this is probably the most powerful thing we have." "The defence against the dangers of authoritarianism in tech, or rather facilitated by tech, is local networks, local community, community activism, and community spirit," she said. "Don't expect to change the world, but keep your corner of it safe for you and yours." Raising awareness of the dangers of data tracking and harvesting is vital for educating more people about data privacy and building a wider campaign to protect it.


The UK’s secret iCloud backdoor request: A dangerous step toward Orwellian mass surveillance

The idea of a government backdoor might sound reasonable in theory – after all, should law enforcement not have a way to stop criminals? But in reality, backdoors weaken security for everyone and pose serious risks: ... Once a vulnerability is created, it will be exploited – by criminals, hostile nations and even corrupt insiders. The UK government might claim it will only use the backdoor responsibly, but history shows that security loopholes do not stay secret for long. History also shows that legal provisions meant to curtail privacy only in extreme cases have been abused, with the threshold for using them steadily lowered. For example, some local UK councils have been found using CCTV under the Regulation of Investigatory Powers Act (RIPA) to monitor minor offences such as littering, dog fouling, and school catchment fraud. ... Allowing the UK government access to iCloud data could set a dangerous precedent. If Apple complies, other countries – China, Russia, Saudi Arabia – will demand the same. The moment a backdoor is created, Apple loses control over who can access it. I have seen what happens when governments have unchecked power. In former Czechoslovakia, the state monitored citizens, controlled the media and crushed dissent.