
Daily Tech Digest - October 30, 2025


Quote for the day:

"Leadership is like beauty; it's hard to define, but you know it when you see it." -- Warren Bennis



Why CIOs need to master the art of adaptation

Adaptability sounds simple in theory, but when and how CIOs should walk away from tested tools and procedures is another matter. ... “If those criteria are clear, then saying no to a vendor or not yet to a CEO is measurable and people can see the reasoning, rather than it feeling arbitrary,” says Dimitri Osler ... Not every piece of wisdom about adaptability deserves to be followed. Mantras like “fail fast” sound inspiring but can lead CIOs astray. The risk is spreading teams too thin, chasing fads, and losing sight of real priorities. “The most overrated advice is this idea you immediately have to adopt everything new or risk being left behind,” says Osler. “In practice, reckless adoption just creates technical and cultural debt that slows you down later.” Another piece of advice he’d challenge is the idea of constant reorganization. “Change for the sake of change doesn’t make teams more adaptive,” he says. “It destabilizes them.” Real adaptability comes from anchored adjustments, where every shift is tied to a purpose; otherwise you’re just creating motion without progress, Osler adds. ... A powerful way to build adaptability is to create a culture of constant learning, in which employees at all levels are expected to grow. This can be achieved by seeing change as an opportunity, not a disruption. Structures like flatter hierarchies can also play a role because they can enable fast decision-making and give people the confidence to respond to shifting circumstances, Madanchian adds.


Building Responsible Agentic AI Architecture

The architecture of agentic AI with guardrails defines how intelligent systems progress from understanding intent to taking action—all while being continuously monitored for compliance, contextual accuracy, and ethical safety. At its core, this architecture is not just about enabling autonomy but about establishing structured accountability. Each layer builds upon the previous one to ensure that the AI system functions within defined operational, ethical, and regulatory boundaries. ... Implementing agentic guardrails requires a combination of technical, architectural, and governance components that work together to ensure AI systems operate safely and reliably. These components span multiple layers — from data ingestion and prompt handling to reasoning validation and continuous monitoring — forming a cohesive control infrastructure for responsible AI behavior. ... The deployment of AI guardrails spans nearly every major industry where automation, decision-making, and compliance intersect. Guardrails act as the architectural assurance layer that ensures AI systems operate safely, ethically, and within regulatory and operational constraints. ... While agentic AI holds extraordinary potential, recent failures across industries underscore the need for comprehensive governance frameworks, robust integration strategies, and explicit success criteria.
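The layered accountability described above can be sketched as a pipeline in which each guardrail inspects a proposed action and can veto it before execution. This is a minimal illustration; the layer names and rules below are invented, not taken from any particular framework.

```python
# Minimal sketch of a layered guardrail pipeline: each layer inspects a
# proposed action and can veto it before the agent executes anything.
# The three checks here are illustrative stand-ins for real input,
# policy, and audit layers.

def input_guard(action):
    # Reject actions whose intent contains obviously disallowed content.
    return "delete_all" not in action["intent"]

def policy_guard(action):
    # Enforce an operational boundary, e.g. a spending limit.
    return action.get("cost", 0) <= 100

def audit_guard(action):
    # Require a traceable requester for accountability.
    return bool(action.get("requested_by"))

GUARDRAILS = [input_guard, policy_guard, audit_guard]

def run_with_guardrails(action):
    """Return (allowed, name_of_failed_layer_or_None)."""
    for guard in GUARDRAILS:
        if not guard(action):
            return False, guard.__name__
    return True, None

allowed, failed = run_with_guardrails(
    {"intent": "summarize_report", "cost": 5, "requested_by": "analyst42"}
)
```

Because each layer only sees the action and returns a boolean, new guardrails can be appended without touching the others, which is the "each layer builds upon the previous one" property the article describes.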


Decoding Black Box AI: The Global Push for Explainability and Transparency

The relationship between regulatory requirements and standards development highlights the connection between legal, technical, and institutional domains. Regulations like the AI Act can guide standardization, while standards help put regulatory principles into practice across different regions. Yet, on a global level, we mostly see recognition of the importance of explainability and encouragement of standards, rather than detailed or universally adopted rules. To bridge this gap, further research and global coordination are needed to harmonize emerging standards with regulatory frameworks, ultimately ensuring that explainability is effectively addressed as AI technologies proliferate across borders. ... However, in practice, several of these strategies tend to equate explainability primarily with technical transparency. They often frame solutions in terms of making AI systems’ inner workings more accessible to technical experts, rather than addressing broader societal or ethical dimensions. ... Transparency initiatives are increasingly recognized for fostering stakeholder trust and promoting the adoption of AI technologies, especially when clear regulatory directives on AI explainability have not yet been developed. By providing stakeholders with visibility into the underlying algorithms and data usage, these initiatives demystify AI systems and serve as foundational elements for building credibility and accountability within organizations.


How neighbors could spy on smart homes

Even with strong wireless encryption, privacy in connected homes may be thinner than expected. A new study from Leipzig University shows that someone in an adjacent apartment could learn personal details about a household without breaking any encryption. ... the analysis focused on what leaks through side channels, the parts of communication that remain visible even when payloads are protected. Every wireless packet exposes timing, size, and signal strength. By watching these details over time, the researcher could map out daily routines. ... “Given the black box nature of this passive monitoring, even if the CSI was accurate, you would have no ground truth to ‘decode’ the readings to assign them to human behavior. So technically it would be advantageous, but you would have a hard time in classifying this data.” Once these patterns were established, a passive observer could tell when someone was awake, working, cooking, or relaxing. Activity peaks from a smart speaker or streaming box pointed to media consumption, while long quiet periods matched sleeping hours. None of this required access to the home’s WiFi network. ... The findings show that privacy exposure in smart homes goes beyond traditional hacking. Even with WPA2 or WPA3 encryption, network traffic leaks enough side information for outsiders to make inferences about occupants. A determined observer could build profiles of daily schedules, detect absences, and learn which devices are in use.
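To make the side-channel idea concrete, here is a toy sketch (synthetic data, simplified assumptions) of how an observer who sees only per-packet timestamps and sizes, never payloads, can already separate a household's active hours from its quiet ones.

```python
# Illustrative side-channel sketch: even with payloads encrypted, an
# observer sees each packet's timestamp and size. Binning traffic into
# time windows separates "active" from "quiet" periods in a routine.
# The packet capture below is synthetic.

def traffic_profile(packets, window=3600):
    """packets: list of (timestamp_seconds, size_bytes) pairs.
    Returns {window_start: total_bytes} without reading any payload."""
    profile = {}
    for ts, size in packets:
        bucket = int(ts // window) * window
        profile[bucket] = profile.get(bucket, 0) + size
    return profile

def active_windows(profile, threshold=10_000):
    # Windows above the byte threshold hint at streaming or device
    # activity; long runs below it match sleep or absence.
    return sorted(b for b, total in profile.items() if total >= threshold)

# Synthetic capture: a burst of traffic in hour 0, near-silence in hour 1.
packets = [(10, 1500)] * 20 + [(3700, 60)] * 3
profile = traffic_profile(packets)
busy = active_windows(profile)  # only the first hour registers as active
```

Real attacks would add signal strength and per-device MAC separation, but the principle is the same: metadata alone yields a daily schedule.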


Ransom payment rates drop to historic low as attackers adapt

The economics of ransomware are changing rapidly. Historically, attackers relied on broad access through vulnerabilities and credentials, operating with low overheads. The introduction of the RaaS model allowed for greater scalability, but also brought increased costs associated with access brokers, data storage, and operational logistics. Over time, this has eroded profit margins and fractured trust among affiliates, leading some groups to abandon ransomware in favour of data-theft-only operations. Recent industry upheaval, including the collapse of prominent RaaS brands in 2024, has further destabilised the market. ... In Q3 2025, both the average ransom payment (USD $376,941) and median payment (USD $140,000) dropped sharply by 66% and 65% respectively compared with the previous quarter. Payment rates also fell to a historic low of 23% across incidents involving encryption, data exfiltration, and other forms of extortion, underlining the challenges faced by ransomware groups in securing financial rewards. This trend reflects two predominant factors: Large enterprises are increasingly refusing to pay ransoms, and attacks on smaller organisations, which are more likely to pay, generally result in lower sums. The drop in payment rates is even more pronounced in data exfiltration-only incidents, with just 19% resulting in a payout in Q3, down to another record low.
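The quoted drops imply what the prior-quarter figures were: dividing the current value by one minus the drop fraction backs them out. A small sanity check, ignoring rounding in the reported percentages:

```python
# Sanity check on the quoted Q3 2025 figures: a 66% quarter-over-quarter
# drop to an average of USD $376,941 implies a prior-quarter average of
# 376,941 / (1 - 0.66), roughly $1.11M; likewise the median's 65% drop
# implies a prior median of about $400,000.

def implied_prior(current, drop_fraction):
    """Back out the previous-quarter value from a reported drop."""
    return current / (1 - drop_fraction)

prior_avg = implied_prior(376_941, 0.66)
prior_median = implied_prior(140_000, 0.65)
```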


Shadow AI’s Role in Data Breaches

The adoption barrier is nearly zero: no procurement process, no integration meetings, no IT tickets. All it takes is curiosity and an internet connection. Employees see immediate productivity gains, faster answers, better drafts, cleaner code, and the risks feel abstract. Even when policies prohibit certain AI tools, enforcement is tricky. Blocking sites might prevent direct access, but it won’t stop someone from using their phone or personal laptop. The reality is that AI tools are designed for frictionless use, and that very frictionlessness is what makes them so hard to contain. ... For regulated industries, the compliance fallout can be severe. Healthcare providers risk HIPAA violations if patient information is exposed. Financial institutions face penalties for breaking data residency laws. In competitive sectors, leaked product designs or proprietary algorithms can hand rivals an unearned advantage. The reputational hit can be just as damaging, and once customers or partners lose confidence in your data handling, restoring trust becomes a long-term uphill climb. Unlike a breach caused by a known vulnerability, the root cause in shadow AI incidents is often harder to patch because it stems from behavior, not just infrastructure. ... The first instinct might be to ban unapproved AI outright. That approach rarely works long-term. Employees will either find workarounds or disengage from productivity gains entirely, fostering frustration and eroding trust in leadership. 


Deepfake Attacks Are Happening. Here’s How Firms Should Respond

The quality of deepfake technology is increasing “at a dramatic rate,” agrees Will Richmond-Coggan, partner and head of cyber disputes at Freeths LLP. “The result is that there can be less confidence that real-time audio deepfakes, or even video, will be detectable through artefacts and errors as they have been in the past.” Adding to the risk, many people share images and audio recordings of themselves via social media, while some host vlogs or podcasts. ... As the technology develops, Tigges predicts fake Zoom meetings will become more compelling and interactive. “Interviews with prospective employees and third-party vendors may be malicious, and conventional employees will find themselves battling state-sponsored threat actors more regularly in pursuit of their daily remit.” ... User scepticism is critical, agrees Tigges. He recommends “out-of-band authentication”: “If someone asks to make an IT-related change, ask that person via another communication method. If you're in a Zoom meeting, shoot them a Slack message.” To avoid being caught out by deepfakes, it is also important that employees are willing to challenge authority, says Richmond-Coggan. “Even in an emergency it will be better for someone in leadership to be challenged and made to verify their identity, than the organisation being brought down because someone blindly followed instructions that didn’t make sense to them, or which they were too afraid to challenge.”
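The out-of-band rule above reduces to a simple invariant: a request arriving on one channel is only acted on after the same person confirms it on a different channel. A minimal sketch, with invented channel names and request shape:

```python
# Sketch of out-of-band verification: execute a sensitive request only
# after the requester confirms it on a channel other than the one the
# request arrived on. The request/confirmation structures are invented.

def approve_change(request, confirmations):
    """confirmations: set of (user, channel) pairs already received.
    Approve only if the requester confirmed on a second channel."""
    user, origin_channel = request["user"], request["channel"]
    return any(
        u == user and ch != origin_channel for u, ch in confirmations
    )

request = {"user": "cfo", "channel": "zoom", "action": "wire transfer"}
ok_same = approve_change(request, {("cfo", "zoom")})   # same channel: no
ok_oob = approve_change(request, {("cfo", "slack")})   # second channel: yes
```

A deepfaked Zoom caller can fake the origin channel but cannot easily control the victim's independent Slack or phone confirmation, which is why the check keys on the channel differing, not on the content of the request.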


Obsidian: SaaS Vendors Must Adopt Security Standards as Threats Grow

The problem, he wrote, is that SaaS vendors tend to set their own rules: security settings and permissions can differ from app to app, hampering risk management; posture management is hobbled by limited security APIs that restrict visibility into configurations; and poor logs and data telemetry make threats difficult to detect, investigate, and respond to. “For years, SaaS security has been a one-way street,” Tran wrote. “SaaS vendors cite the shared responsibility model, while customers struggle to secure hundreds of unique applications, each with limited, inconsistent security controls and blind spots.” ... Obsidian’s Tran pointed to the recent breaches of hundreds of Salesforce customers due to OAuth tokens associated with a third party, Salesloft and its Drift AI chat agent, being compromised, allowing the threat actors access into both Salesforce and Google Workspace instances. The incidents illustrated the need for strong security in SaaS environments. “The same cascading risks apply to misconfigured AI agents,” Tran wrote. “We’ve witnessed one agent download over 16 million files while every other user and app combined accounted for just one million. AI agents not only move unprecedented amounts of data, they are often overprivileged. Our data shows 90% of AI agents are over-permissioned in SaaS.” ... Given the rising threats, “SaaS customers are sounding the alarm and demanding greater visibility, guardrails and accountability from vendors to curb these risks,” he wrote.


Why your Technology Spend isn’t Delivering the Productivity you Expected

Firms essentially spend years building technical debt faster than they can pay it down. Even after modernisation projects, they can’t bring themselves to decommission old systems. So they end up running both. This is the vicious cycle. You keep spending to maintain what you have, building more debt, paying what amounts to a complexity tax in time and money. This problem compounds in asset management because most firms are running fragmented systems for different asset classes, with siloed data environments and no comprehensive platform. Integrating anything becomes a nightmare. ... Here’s where it gets interesting, and where most firms stop short. Virtualisation gives you access to data wherever it lives. That’s the foundation. But the real power comes when you layer on a modern investment management platform that maintains bi-temporal records (which track both when something happened and when it was recorded) as well as full audit trails. Now you can query data as it existed at any point in time. Understand exactly how positions and valuations evolved. ... The best data strategy is often the simplest one: connect, don’t copy, govern, then operationalise. This may sound almost too straightforward given the complexity most firms are dealing with. But that’s precisely the point. We’ve overcomplicated data architecture to the point where 80 per cent of our budget goes to maintenance instead of innovation.
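The bi-temporal idea mentioned above can be shown in a few lines: each record carries both a business-valid date and the date it was recorded, so you can ask what the firm believed about a position as of any knowledge date. The schema and data here are invented for illustration.

```python
# Sketch of a bi-temporal record set: each row stores when a value was
# valid (business time) and when it was recorded (system time), so a
# later correction never erases what was believed earlier. Data is
# illustrative; ISO date strings compare correctly lexicographically.

records = [
    # (position, value, valid_from, recorded_at)
    ("FUND-A", 100, "2025-01-01", "2025-01-02"),
    ("FUND-A", 110, "2025-01-01", "2025-01-15"),  # later correction
]

def as_of(records, position, knowledge_date):
    """Latest recorded value for `position` known by knowledge_date."""
    known = [r for r in records
             if r[0] == position and r[3] <= knowledge_date]
    return max(known, key=lambda r: r[3])[1] if known else None

before_fix = as_of(records, "FUND-A", "2025-01-10")  # the original 100
after_fix = as_of(records, "FUND-A", "2025-01-31")   # the corrected 110
```

This is exactly what makes point-in-time audit queries possible: the correction on January 15 coexists with, rather than overwrites, the value known on January 10.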


Beyond FUD: The Economist's Guide to Defending Your Cybersecurity Budget

Budget conversations often drift toward "Fear, Uncertainty, and Doubt." The language signals urgency without demonstrating scale, which weakens credibility with financially minded executives. Risk programs earn trust when they quantify likelihood and impact using recognized methods for risk assessment and communication. ... Applied to cybersecurity, VaR frames exposure as a distribution of financial outcomes rather than a binary event. A CISO can estimate loss for data disclosure, ransomware downtime, or intellectual-property theft and present a 95% confidence loss figure over a quarterly or annual horizon, aligning the presentation with established financial risk practice. NIST's guidance supports this structure by emphasizing scenario definition, likelihood modeling, and impact estimation that feed enterprise risk records and executive reporting. The result is a definitive change from alarm to analysis. A board hears an exposure stated as a probability-weighted magnitude with a clear confidence level and time frame. The number becomes a defensible metric that fits governance, insurance negotiations, and budget trade-offs governed by enterprise risk appetite. ... ELA quantifies the dollar value of risk reduction attributable to a control. The calculation values avoided losses against calibrated probabilities, producing a defensible benefit line item that aligns with financial reporting. 
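The VaR framing above can be sketched with a toy Monte Carlo model: simulate many years, each with some probability of an incident and a skewed severity distribution, then read off the loss level not exceeded at 95% confidence. The scenario parameters below are invented purely for illustration, not calibrated estimates.

```python
import random

# Toy Monte Carlo cyber-VaR sketch, assuming one scenario: a 30% annual
# incident probability and a lognormal loss severity. All parameters
# are illustrative, not calibrated to any real loss data.

def simulate_annual_loss(p_incident, mu, sigma, trials=100_000, seed=7):
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        loss = 0.0
        if rng.random() < p_incident:             # does an incident occur?
            loss = rng.lognormvariate(mu, sigma)  # severity if it does
        losses.append(loss)
    return sorted(losses)

def value_at_risk(losses, confidence=0.95):
    # The annual loss not exceeded in `confidence` of simulated years.
    return losses[int(confidence * len(losses)) - 1]

losses = simulate_annual_loss(p_incident=0.3, mu=12.0, sigma=1.0)
var_95 = value_at_risk(losses)  # "95% confidence" annual loss figure
```

The output is exactly the shape of statement the article recommends: a probability-weighted magnitude with an explicit confidence level and time horizon, rather than a binary "we might get breached".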

Daily Tech Digest - April 02, 2025


Quote for the day:

"People will not change their minds but they will make new decisions based upon new information." -- Orrin Woodward


The smart way to tackle data storage challenges

Data intelligence makes data stored on the X10000 ready for AI applications to use as soon as they are ingested. The company has a demo of this, where the X10000 ingests customer support documents and enables users to instantly ask it relevant natural language questions via a locally hosted version of the DeepSeek LLM. This kind of application wouldn’t be possible with low-speed legacy object storage, says the company. The X10000’s all-NVMe storage architecture helps to support low-latency access to this indexed and vectorized data, avoiding front-end caching bottlenecks. Advances like these provide up to 6x faster performance than the X10000’s leading object storage competitors, according to HPE’s benchmark testing. ... The containerized architecture opens up options for inline and out-of-band software services, such as automated provisioning and life cycle management of storage resources. It is also easier to localize a workload’s data and compute resources, minimizing data movement by enabling workloads to process data in place rather than moving it to other compute nodes. This is an important performance factor in low-latency applications like AI training and inference. Another aspect of container-based workloads is that all workloads can interact with the same object storage layer. 
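The ingest-then-query pattern described above (documents vectorized at ingest, questions matched against them) can be illustrated with a deliberately tiny stand-in: bag-of-words vectors and cosine similarity in place of the real embedding pipeline, which the article does not detail.

```python
import math
from collections import Counter

# Toy sketch of "vectorize at ingest, match questions at query time".
# Bag-of-words + cosine similarity stands in for a real embedding
# model; the documents are invented examples.

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_match(question, docs):
    """Return the ingested document most similar to the question."""
    q = vectorize(question)
    return max(docs, key=lambda d: cosine(q, vectorize(d)))

docs = [
    "how to reset your router password",
    "warranty terms for storage hardware",
]
answer_doc = best_match("I forgot my router password", docs)
```

The storage point in the article is about where the heavy lifting happens: with all-NVMe object storage, the vector index lookups behind `best_match` stay low-latency at scale instead of being bottlenecked by a caching tier.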


Talent gap complicates cost-conscious cloud planning

The top strategy so far is what one enterprise calls the “Cloud Team.” You assemble all your people with cloud skills, and your own best software architect, and have the team examine current and proposed cloud applications, looking for a high-level approach that meets business goals. In this process, the team tries to avoid implementation specifics, focusing instead on the notion that a hybrid application has an agile cloud side and a governance-and-sovereignty data center side, and what has to be done is push functionality into the right place. ... To enterprises who tried the Cloud Team, there’s also a deeper lesson. In fact, there are two. Remember the old “the cloud changes everything” claim? Well, it does, but not the way we thought, or at least not as simply and directly as we thought. The economic revolution of the cloud is selective, a set of benefits that has to be carefully fit to business problems in order to deliver the promised gains. Application development overall has to change, to emphasize a strategic-then-tactical flow that top-down design always called for but didn’t always deliver. That’s the first lesson. The second is that the kinds of applications that the cloud changes the most are applications we can’t move there, because they never got implemented anywhere else.


Your smart home may not be as secure as you think

Most smart devices rely on Wi-Fi to communicate. If these devices connect to an unsecured or poorly protected Wi-Fi network, they can become an easy target. Unencrypted networks are especially vulnerable, and hackers can intercept sensitive data, such as passwords or personal information, being transmitted from the devices. ... Many smart devices collect personal data—sometimes more than users realize. Some devices, like voice assistants or security cameras, are constantly listening or recording, which can lead to privacy violations if not properly secured. In some cases, manufacturers don’t encrypt or secure the data they collect, making it easier for malicious actors to exploit it. ... Smart home devices often connect to third-party platforms or other devices. These integrations can create security holes if the third-party services don’t have strong protections in place. A breach in one service could give attackers access to an entire smart home ecosystem. To mitigate this risk, it’s important to review the security practices of any third-party service before integrating it with your IoT devices. ... If your devices support it, always enable 2FA and link your accounts to a reliable authentication app or your mobile number. You can use 2FA with smart home hubs and cloud-based apps that control IoT devices.
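The 2FA advice above usually means TOTP codes from an authenticator app. How those six-digit codes are derived is compact enough to show with the standard library: the device and the service share a secret and both hash the current 30-second time step (RFC 6238). The secret below is the RFC's published test value, not a real credential.

```python
import base64
import hmac
import struct
import time

# Sketch of TOTP (RFC 6238), the scheme behind most authenticator-app
# 2FA codes: HMAC the current 30-second counter with a shared secret,
# then dynamically truncate to 6 digits. The secret below is the RFC
# 6238 test secret, not a real credential.

def totp(secret_b32, for_time=None, step=30, digits=6):
    key = base64.b32decode(secret_b32)
    now = for_time if for_time is not None else time.time()
    counter = int(now // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

RFC_SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"  # base32 of "12345678..."
code = totp(RFC_SECRET, for_time=59)  # fixed time => reproducible code
```

Because the code depends on a secret that never travels over the network plus the current time, an attacker who intercepts one code cannot reuse it after the 30-second window closes, which is exactly why 2FA blunts the unsecured-Wi-Fi risks described above.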


Beyond compensation—crafting an experience that retains talent

Looking ahead, the companies that succeed in attracting and retaining top talent will be those that embrace innovation in their Total Rewards strategies. AI-driven personalization is already changing the game—organizations are using AI-powered platforms to tailor benefits to individual employee needs, offering a menu of options such as additional PTO, learning stipends, or wellness perks. Similarly, equity-based compensation models are evolving, with some businesses exploring cryptocurrency-based rewards and fractional ownership opportunities. Sustainability is also becoming a key factor in Total Rewards. Companies that incorporate sustainability-linked incentives, such as carbon footprint reduction rewards or volunteer days, are seeing higher engagement and satisfaction levels. ... Total Rewards is no longer just about compensation—it’s about creating an ecosystem that supports employees in every aspect of their work and life. Companies that adopt the VALUE framework—Variable pay, Aligned well-being benefits, Learning and growth opportunities, Ultimate flexibility, and Engagement-driven recognition—will not only attract top talent but also foster long-term loyalty and satisfaction.


Bridging the Gap Between the CISO & the Board of Directors

Many executives, including board members, may not fully understand the CISO's role. This isn't just a communications gap; it's also an opportunity to build relationships across departments. When CISOs connect security priorities to broader business goals, they show how cybersecurity is a business enabler rather than just an operational cost. ... Often, those in technical roles lack the ability to speak anything other than the language of tech, making it harder to communicate with board members who don't hold tech or cybersecurity expertise. I remember presenting to our board early into my CISO role and, once I was done, seeing some blank stares. The issue wasn't that they didn't care about what I was saying; we just weren't speaking the same language. ... There are many areas in which communication between a board and CISO is important — but there may be none more important than compliance. Data breaches today are not just technical failures. They carry significant legal, financial, and reputational consequences. In this environment, regulatory compliance isn't just a box to check; it's a critical business risk that CISOs must manage, particularly as boards become more aware of the business impact of control failures in cybersecurity.


What does a comprehensive backup strategy look like?

Though backups are rarely needed, they form the foundation of disaster recovery. Milovan follows the classic 3-2-1 rule: three data copies, on two different media types, with one off-site copy. He insists on maintaining multiple copies “just in case.” In addition, NAS users need to update their OS regularly, Synology’s Alexandra Bejan says. “Outdated operating systems are particularly vulnerable there.” Bejan emphasizes the positives from implementing the textbook best practices Ichthus employs. ... One may imagine that smaller enterprises make for easier targets due to their limited IT. However, nothing could be further from the truth. Bejan: “We have observed that the larger the enterprise, the more difficult it is to implement a comprehensive data protection strategy.” She says the primary reason for this lies in the previously fragmented investments in backup infrastructure, where different solutions were procured for various workloads. “These legacy solutions struggle to effectively manage the rapidly growing number of workloads and the increasing data size. At the same time, they require significant human resources for training, with steep learning curves, making self-learning difficult. When personnel are reassigned, considerable time is needed to relearn the system.”
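The 3-2-1 rule quoted above is mechanical enough to express as a check. A minimal sketch, describing each copy as a (media type, off-site) pair; the inventory names are illustrative.

```python
# Sketch of the 3-2-1 backup rule as a predicate: at least three
# copies, on at least two media types, with at least one off-site.
# The backup inventory below is an invented example.

def satisfies_3_2_1(copies):
    """copies: list of (media_type, is_offsite) tuples."""
    enough_copies = len(copies) >= 3
    enough_media = len({media for media, _ in copies}) >= 2
    has_offsite = any(offsite for _, offsite in copies)
    return enough_copies and enough_media and has_offsite

backup_set = [
    ("nas", False),    # primary on-site copy
    ("tape", False),   # second media type, still on-site
    ("cloud", True),   # off-site copy
]
ok = satisfies_3_2_1(backup_set)
```

Each clause guards against a different failure mode: copy count against single-file corruption, media diversity against a media-class failure (e.g. a ransomware hit on all networked disks), and the off-site copy against site-level disasters.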


Malicious actors increasingly put privileged identity access to work across attack chains

Many of these credentials are extracted from computers using so-called infostealer malware, malicious programs that scour the operating system and installed applications for saved usernames and passwords, browser session tokens, SSH and VPN certificates, API keys, and more. The advantage of using stolen credentials for initial access is that they require less skill compared to exploiting vulnerabilities in publicly facing applications or tricking users into installing malware from email links or attachments — although these initial access methods remain popular as well. ... “Skilled actors have created tooling that is freely available on the open web, easy to deploy, and designed to specifically target cloud environments,” the Talos researchers found. “Some examples include ROADtools and AAAInternals, publicly available frameworks designed to enumerate Microsoft Entra ID environments. These tools can collect data on users, groups, applications, service principals, and devices, and execute commands.” These are often coupled with techniques designed to exploit the lack of MFA or incorrectly configured MFA. For example, push spray attacks, also known as MFA bombing or MFA fatigue, rely on bombing the user with MFA push notifications on their phones until they get annoyed and approve the login thinking it’s probably the system malfunctioning.
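The MFA-fatigue pattern described above is detectable from authentication logs: a burst of push prompts to one user in a short window is anomalous. A minimal sketch with invented thresholds and a synthetic event stream:

```python
# Sketch of MFA "push spray" detection: flag any user who receives more
# than max_pushes prompts inside a sliding time window. Thresholds and
# the event stream are illustrative.

def flag_push_fatigue(events, window=300, max_pushes=5):
    """events: list of (timestamp_seconds, user) push-prompt records.
    Returns the set of users exceeding max_pushes in any window."""
    flagged = set()
    by_user = {}
    for ts, user in sorted(events):
        times = by_user.setdefault(user, [])
        times.append(ts)
        recent = [t for t in times if ts - t <= window]
        by_user[user] = recent          # keep only the sliding window
        if len(recent) > max_pushes:
            flagged.add(user)
    return flagged

# Synthetic log: "alice" is bombed with prompts every 30s; "bob" gets
# two legitimate prompts hours apart.
events = [(i * 30, "alice") for i in range(8)] + [(0, "bob"), (4000, "bob")]
suspects = flag_push_fatigue(events)
```

Flagging is only half the control; the other half is pairing it with number-matching or similar MFA modes so a fatigued user cannot approve blindly.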


Role of Blockchain in Enhancing Cybersecurity

At its core, a blockchain is a distributed ledger in which each data block is cryptographically connected to its predecessor, forming an unbreakable chain. Without network authorization, modifying or removing data from a blockchain becomes exceedingly difficult. This ensures that conventional data records stay consistent and accurate over time. The architectural structure of blockchain plays a critical role in protecting data integrity. Every single transaction is time-stamped and merged into a block, which is then confirmed and sealed through consensus. This process provides an undeniable record of all activities, simplifying audits and boosting confidence in system reliability. Similarly, blockchain ensures that every financial transaction is correctly documented and easily accessible. This innovation helps prevent record manipulation, double-spending, and other forms of fraud. By combining cryptographic safeguards with a decentralized architecture, it offers an ideal solution to information security. It also significantly reduces risks related to data breaches, hacking, and unauthorized access in the digital realm. Furthermore, blockchain strengthens cybersecurity by addressing concerns about unauthorized access and the rising threat of cyberattacks. 
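The chaining property described above fits in a few lines: each block stores the hash of its predecessor, so altering any earlier block invalidates every hash that follows. A minimal sketch (no consensus or networking, just the integrity mechanism):

```python
import hashlib
import json

# Minimal hash-chain sketch of blockchain data integrity: each block
# records the hash of the previous block, so tampering with history
# breaks verification for everything after the altered block.

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev": prev})
    return chain

def verify(chain):
    return all(
        chain[i]["prev"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_block(chain, "alice pays bob 5")
add_block(chain, "bob pays carol 2")
intact = verify(chain)                 # True before tampering
chain[0]["data"] = "alice pays bob 500"  # rewrite history
tampered_ok = verify(chain)            # now fails verification
```

A real blockchain adds timestamps, consensus, and distribution across many nodes so that no single party can recompute the chain after tampering, but the detection mechanism is this hash linkage.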


Thriving in the Second Wave of Big Data Modernization

When businesses want to use big data to power AI solutions – as opposed to the more traditional types of analytics workloads that predominated during the first wave of big data modernization – the problems stemming from poor data management snowball. They transform from mere annoyances or hindrances into show stoppers. ... But in the age of AI, this process would likely instead entail giving the employee access to a generative AI tool that can interpret a question formulated using natural language and generate a response based on the organizational data that the AI was trained on. In this case, data quality or security issues could become very problematic. ... Unfortunately, there is no magic bullet that can cure the types of issues I’ve laid out above. A large part of the solution involves continuing to do the hard work of improving data quality, erecting effective access controls and making data infrastructure even more scalable. As they do these things, however, businesses must pay careful attention to the unique requirements of AI use cases. For example, when they create security controls, they must do so in ways that are recognizable to AI tools, such that the tools will know which types of data should be accessible to which users.


The DevOps Bottleneck: Why IaC Orchestration is the Missing Piece

At the end of the day, instead of eliminating operational burdens, many organizations just shifted them. DevOps, SREs, CloudOps—whatever you call them—these teams still end up being the gatekeepers. They own the application deployment pipelines, infrastructure lifecycle management, and security policies. And like any team, they seek independence and control—not out of malice, but out of necessity. Think about it: If your job is to keep production stable, are you really going to let every dev push infrastructure changes willy-nilly? Of course not. The result? Silos of unique responsibility and sacred internal knowledge. The very teams that were meant to empower developers become blockers instead. ... IaC orchestration isn’t about replacing your existing tools; it’s about making them work at scale. Think about how GitHub changed software development. Version control wasn’t new—but GitHub made it easier to collaborate, review code, and manage contributions without stepping on each other’s work. That’s exactly what orchestration does for IaC. It allows large teams to manage complex infrastructure without turning into a bottleneck. It enforces guardrails while enabling self-service for developers. 
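The guardrails-with-self-service idea can be sketched as a pre-apply policy check: developers submit plans freely, and the orchestration layer rejects resources that violate organizational rules instead of routing every change through an ops gatekeeper. The plan structure and rule below are invented for illustration; real tools would evaluate something like Terraform plan JSON.

```python
# Sketch of an orchestration-layer guardrail: before a plan is applied,
# a policy check rejects resources that violate org rules. The resource
# schema and the single rule here are invented examples.

def check_plan(resources):
    """resources: list of dicts with 'name', 'type', 'public' keys.
    Returns violation messages; an empty list means auto-approve."""
    violations = []
    for res in resources:
        if res["type"] == "s3_bucket" and res.get("public"):
            violations.append(f"{res['name']}: public bucket not allowed")
    return violations

plan = [
    {"name": "logs", "type": "s3_bucket", "public": False},
    {"name": "assets", "type": "s3_bucket", "public": True},
]
problems = check_plan(plan)  # flags only the public "assets" bucket
```

The point is the division of labor: the ops team writes `check_plan`-style policies once, and developers get immediate, self-service feedback instead of waiting in a ticket queue.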

Daily Tech Digest - February 13, 2025


Quote for the day:

"Coaching is unlocking a person's potential to maximize their own performance. It is helping them to learn rather than teaching them." -- John Whitmore


The cloud giants stumble

The challenge for Amazon, Microsoft, and Google will be to adapt their strategies to this evolving landscape. They’ll need to address concerns about costs, provide more flexible deployment options, and develop compelling AI solutions that deliver clear value to enterprises. Without these changes, they may continue to see their growth rates decline as organizations increasingly turn to alternative solutions that better meet their specific needs. This does not mean failure for Big Cloud, but they will take a few years to figure out what’s important to their market. They are a bit off-target now. The rise of specialized providers and the growing acceptance of private cloud solutions means enterprises can be more selective, choosing fit-for-purpose options rather than forcing all workloads into a one-size-fits-all public cloud model that may not be cost-effective. This is particularly relevant for AI initiatives, where specialized infrastructure providers often deliver better value. This freedom of choice comes with increased responsibility. Enterprises must develop more substantial in-house expertise to effectively evaluate and manage multiple infrastructure options. ... The key takeaway is clear: Enterprises are entering an era where they can build infrastructure strategies based on their specific needs rather than vendor limitations. 


Lines Between Nation-State and Cybercrime Groups Disappearing

“The vast cybercriminal ecosystem has acted as an accelerant for state-sponsored hacking, providing malware, vulnerabilities, and in some cases full-spectrum operations to states,” said Ben Read, senior manager at Google Threat Intelligence Group, which includes the Mandiant Intelligence and Threat Analysis Group teams. “These capabilities can be cheaper and more deniable than those developed directly by a state.” ... While nation-states for years have leveraged cybercriminals and their tools, the trend has accelerated since Russia launched its ongoing invasion of neighboring Ukraine in 2022, illustrating that at times of heightened need, financially motivated groups can be used to help the cause of countries. Nation-states can buy cyber capabilities from cybercrime groups or via underground marketplaces. Cybercriminals tend to specialize in certain areas and partner with others with different skills, and the specialization opens opportunities for state-backed actors to be customers that are buying malware and other tools from criminals. “Purchasing malware, credentials, or other key resources from illicit forums can be cheaper for state-backed groups than developing them in-house, while also providing some ability to blend in to financially motivated operations and attract less notice,” the researchers wrote.


Agentic AI vs. generative AI

Generative AI is artificial intelligence that can create original content—such as text, images, video, audio or software code—in response to a user’s prompt or request. Gen AI relies on using machine learning models called deep learning models—algorithms that simulate the learning and decision-making processes of the human brain—and other technologies like robotic process automation (RPA). These models work by identifying and encoding the patterns and relationships in huge amounts of data, and then using that information to understand users' natural language requests or questions. These models can then generate high-quality text, images, and other content based on the data they were trained on in real time. Agentic AI describes AI systems that are designed to autonomously make decisions and act, with the ability to pursue complex goals with limited supervision. It brings together the flexible characteristics of large language models (LLMs) with the accuracy of traditional programming. This type of AI acts autonomously to achieve a goal by using technologies like natural language processing (NLP), machine learning, reinforcement learning and knowledge representation. It’s a proactive AI-powered approach, whereas gen AI is reactive to the user’s input. Agentic AI can adapt to different or changing situations and has “agency” to make decisions based on context.
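The proactive/reactive distinction above comes down to a loop: a generative model produces one response per prompt, while an agent repeatedly observes state, decides, and acts until a goal holds. A deliberately toy sketch, where the "world" is just a number the agent must drive to a target:

```python
# Toy sketch of the agentic loop: observe state, decide on an action,
# act, and repeat until the goal predicate holds (or a step budget is
# exhausted). The world model here is trivially a number to adjust;
# everything is illustrative.

def agent_loop(state, goal, max_steps=20):
    """Plan-act-observe until goal(state) is true."""
    steps = 0
    while not goal(state) and steps < max_steps:
        # decide: move toward the target by the smallest step
        action = 1 if state["value"] < state["target"] else -1
        state["value"] += action   # act; the loop condition re-observes
        steps += 1
    return state, steps

state, steps = agent_loop(
    {"value": 0, "target": 5},
    goal=lambda s: s["value"] == s["target"],
)
```

Contrast with gen AI: there is no prompt-response pair here, just a goal and autonomy within the step budget, which is also where guardrails (the `max_steps` cap is the crudest possible one) become essential.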


5 AI Mistakes That Could Kill Your Business In 2025

It’s easy for us to get so excited by the hype around AI that we rush out and start spending money on tools, platforms and projects without aligning them with strategic goals and priorities. This inevitably leads to fragmented initiatives that fail to deliver meaningful results or ROI. To avoid this, always “start with strategy” – implementing a strategic plan that clearly shows how any project or initiative will progress your organization towards improving the metrics and hitting the targets that will define your success. ... Assessing the skills and possibilities of training or reskilling, ensuring there is buy-in across the board, and addressing concerns people might have about job security are all critical. ... On the other hand, being slow to pull the plug on projects that aren’t working out can also be a recipe for disaster – potentially turning what should simply be a short, sharp lesson into a long-term waste of time and resources. There’s a reason that “fail fast” has become a mantra in tech circles. Projects should be designed so that their effectiveness can be quickly assessed, and if they aren’t working out, chalk it up to experience and move on to the next one. ... Make no mistake, going full-throttle on AI is expensive – hardware, software, specialist consulting expertise, compute resources, reskilling and upskilling a workforce and scaling projects from pilot to production – none of this comes cheap.


IoT Security: The Smart House Nightmares

One of the biggest challenges in securing IoT devices is the lack of standardization across the industry. With so many different manufacturers producing a wide variety of devices, there's no universal security standard that all devices must adhere to. This leads to inconsistent security practices and varying levels of protection. Some devices have robust security features, while others may be woefully inadequate. ... Many IoT devices come with default usernames and passwords that are easy to guess. In some cases, these credentials are hardcoded into the device, meaning they can't be changed even if the user wants to. Unfortunately, many users either don't realize they should change these defaults or don't bother. This creates a significant security risk, as these default credentials are often well-known to hackers. A quick search online can reveal the default passwords for thousands of devices, providing cybercriminals with an easy way to gain access to your smart home. ... Another common issue with IoT devices is the lack of regular software updates. Many devices are shipped with outdated firmware that contains known vulnerabilities. Without regular updates, these vulnerabilities remain unpatched, leaving the devices open to exploitation.
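The default-credential problem lends itself to a simple audit: compare each device's login against a list of well-known factory pairs. The sketch below is hypothetical — the credential list and device records are invented for illustration, not drawn from any real default-password database.

```python
# Sketch: flag devices still using well-known factory credentials.
# The credential list and device records are illustrative, not real data.

DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "12345"),
}

def find_default_credential_devices(devices):
    """Return the names of devices whose (username, password) pair is a known default."""
    return [
        d["name"] for d in devices
        if (d["username"], d["password"]) in DEFAULT_CREDENTIALS
    ]

devices = [
    {"name": "camera-1", "username": "admin", "password": "admin"},
    {"name": "thermostat", "username": "alice", "password": "S3cure!pass"},
]
print(find_default_credential_devices(devices))  # ['camera-1']
```

A real audit would pull its defaults from a maintained feed rather than a hardcoded set, but the lookup itself stays this simple, which is exactly why attackers can automate it too.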


Addressing cost and complexity in cybersecurity compliance and governance

Employees across the ranks need to be trained in cybersecurity practices and made aware of their responsibilities towards security, compliance and governance. There has to be an effective mechanism for ensuring compliance and fixing accountability, and at the same time, a communication, feedback and recognition process for encouraging employee involvement. ... Efficiency apart, technologies such as artificial intelligence (AI), machine learning (ML), cloud, and blockchain are making cybersecurity operations smarter. AI and ML can identify anomalous patterns indicative of potential threats in real time and recommend mitigative actions. Cloud provides the required storage and computing infrastructure to house GRC data and applications, and the scalability to expand cybersecurity operations across business entities and geographies. Blockchain provides a secure, transparent and immutable record of GRC data and transactions that can be easily audited. ... The need for cybersecurity compliance and governance is universal, but enterprises need to craft the strategy that's right for them based on their objectives, size, resources, nature of business, compliance obligations in the jurisdictions they operate from, technology landscape and so on.


Cyber Fusion: a next generation approach to NIS2 compliance

This is not a one-off box-ticking exercise. Organisations will need to persistently test their cybersecurity and response capabilities, conduct regular cyber risk assessments and ensure that clear lines of management and reporting responsibility are defined and in place. Ultimately, organisations need to ensure they can detect and respond faster and more effectively to cybersecurity events. The faster a possible threat is detected, the better an organisation can comply with the regulatory reporting requirements should this evolve into a full-blown incident. Importantly, NIS2 highlights the importance of incident reporting and information sharing across industries and along supply chains as being essential for preparing against security threats. As a key requirement of the directive, the voluntary exchange of cybersecurity information is now enshrined as good security practice. ... NIS2 is the EU's toughest cybersecurity directive to date and compliance depends on undergoing a multi-step process that includes understanding the scope; connecting with relevant authorities; undertaking a gap analysis; creating new and updated policies; training the right employees; and monitoring progress. All of which will enable businesses to track their supply chain for threats and vulnerabilities and stay on top of their risk management strategies.


The DPDP Act, 2023 and the Draft DPDP Rules, 2025: What Do They Mean for India’s AI Start-Ups?

Some of the reasonable security measures under the Draft DPDP Rules include implementing measures like encryption, obfuscation, masking or the use of virtual tokens mapped to specific personal data. Further, regular security audits, vulnerability assessments, and penetration testing to identify and address potential risks form part of the organizational measures that may be undertaken. Ensuring that sufficient security measures are taken by AI startups to secure their AI models is crucial. ... The Act requires organizations to retain personal data only for as long as necessary to fulfil the purposes for which it was collected. They must establish and implement clear policies for data retention that align with these guidelines. The draft DPDP Rules provide for specific data retention periods based on the purpose for which the data is being collected and processed. Once the data is no longer needed, organizations should ensure its secure deletion or anonymization to prevent unauthorized access or misuse. Data Principals must be informed 48 hours before their data is to be erased. This process can include automated systems for tracking data lifecycles, conducting regular audits to identify redundant data, and securely erasing it in compliance with industry best practices.
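An automated lifecycle tracker of the kind described could look something like the sketch below: records past their purpose-specific retention period are queued for erasure, and records entering the final 48 hours trigger the notice to the Data Principal. The retention periods, field names, and purposes here are all hypothetical examples, not values prescribed by the Act or the draft Rules.

```python
# Sketch of a retention check under the DPDP framing: records older than a
# purpose-specific retention period are flagged for erasure, and principals
# are notified 48 hours before deletion. Periods and fields are illustrative.

from datetime import datetime, timedelta

RETENTION_PERIODS = {                # purpose -> how long data may be kept
    "loan_processing": timedelta(days=365),
    "marketing": timedelta(days=90),
}
NOTICE_WINDOW = timedelta(hours=48)  # notify principals before erasure

def classify_records(records, now):
    """Split records into (keep, notify, erase) lists based on purpose and age."""
    keep, notify, erase = [], [], []
    for r in records:
        expiry = r["collected_at"] + RETENTION_PERIODS[r["purpose"]]
        if now >= expiry:
            erase.append(r["id"])
        elif now >= expiry - NOTICE_WINDOW:
            notify.append(r["id"])   # send the 48-hour erasure notice
        else:
            keep.append(r["id"])
    return keep, notify, erase

now = datetime(2025, 6, 1)
records = [
    {"id": "r1", "purpose": "marketing", "collected_at": now - timedelta(days=100)},
    {"id": "r2", "purpose": "marketing", "collected_at": now - timedelta(days=89, hours=1)},
    {"id": "r3", "purpose": "loan_processing", "collected_at": now - timedelta(days=30)},
]
print(classify_records(records, now))  # (['r3'], ['r2'], ['r1'])
```

In production this classification would run on a schedule, with the "erase" list feeding a secure-deletion or anonymization job and an audit log recording each action.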


"Blatantly unlawful and horrifically intrusive" data collection is everywhere – how to fight back

Fielding called for "some actual regulation from the actual regulator," and said "as long as it's more profitable and easier to break the law than not, then businesses will." "We cannot expect commercial incentives to save the day for us because they are in direct opposition to the purpose of these laws, which is human rights, human dignity," she added. The Information Commissioner's Office (ICO) has stressed that non-essential cookies shouldn't be deployed on users' devices if they haven't actively given consent. It has also said organisations must make it as easy for users to "reject all" as it is to "accept all." ... "Shame" was something championed by Fielding. She commented on how using "community" and our networks "to make it socially unacceptable to treat people like this is probably the most powerful thing we have." "The defence against the dangers of authoritarianism in tech, or rather facilitated by tech, is local networks, local community, community activism, and community spirit," she said. "Don't expect to change the world, but keep your corner of it safe for you and yours." Raising awareness and sharing the dangers of data tracking and harvesting is vital in educating more people about data privacy and building a wider campaign to protect it.


The UK’s secret iCloud backdoor request: A dangerous step toward Orwellian mass surveillance

The idea of a government backdoor might sound reasonable in theory – after all, should law enforcement not have a way to stop criminals? But in reality, backdoors weaken security for everyone and pose serious risks: ... Once a vulnerability is created, it will be exploited – by criminals, hostile nations and even corrupt insiders. The UK government might claim it will only use the backdoor responsibly, but history shows that security loopholes do not stay secret for long. History also shows that legal provisions meant to reduce privacy only in extreme cases have been abused, and the threshold for using them has lowered. For example, some local UK councils have been found using CCTV under the Regulation of Investigatory Powers Act (RIPA) to monitor minor offences such as littering, dog fouling, and school catchment fraud. ... Allowing the UK government access to iCloud data could set a dangerous precedent. If Apple complies, other countries – China, Russia, Saudi Arabia – will demand the same. The moment a backdoor is created, Apple loses control over who can access it. I have seen what happens when governments have unchecked power. In former Czechoslovakia, the state monitored citizens, controlled the media and crushed dissent. 

Daily Tech Digest - May 16, 2024

Cultivating cognitive liberty in the age of generative AI

Cognitive liberty is a pivotal component of human flourishing that has been overlooked by traditional theories of liberty—primarily because we have taken for granted that our brains and mental experiences are under our own control. This assumption is being replaced with more nuanced understandings of the human brain and its interaction with our environment, our interactions with others, and our interdependence with technology. Cultivating cognitive liberty in the digital age will become increasingly vital to enable humans to exercise individual agency, nurture human creativity, discern fact and fiction, and reclaim our critical thinking skills amid unprecedented cognitive opportunities and risks. Generative AI tools like GPT-4 pose new challenges to cognitive liberty, including the potential to interfere with and manipulate our mental experiences. They can exacerbate biases and distortions that undermine the integrity and reliability of the information we consume, in turn influencing our beliefs, judgments, and decisions. 


Smart homes, smart choices: How innovation is redefining home furnishing

Most notably, the advent of innovations has made shopping for furniture online a far more enjoyable experience. It begins with options. Today, online furniture websites provide customers with a vastly larger catalog of choices than a brick-and-mortar store could imagine, since there are no physical constraints in the digital realm. But vast selections alone are just the beginning. That’s why innovations like AR and VR are so important. Once shoppers identify potential items, AR and VR allow them to view each piece online. They can examine not just static images but pictures from all sides and angles. They can personalize it to fit their style and home. ... First, they understand various key factors, including the origin of the materials being used, how they were made, the labor practices involved, potential environmental impacts, and more. For Wayfair, we are leading the way by including sustainability certifications on approved items as part of our Shop Sustainably commitment. This shift is part of a larger movement called conscious consumerism, where purchasing decisions are made based on those that have positive social, economic, and environmental impacts. 


A Guide to Model Composition

At its core, model composition is a strategy in machine learning that combines multiple models to solve a complex problem that cannot be easily addressed by a single model. This approach leverages the strengths of each individual model, providing more nuanced analyses and improved accuracy. Model composition can be seen as assembling a team of experts, where each member brings specialized knowledge and skills to the table, working together to achieve a common goal. Many real-world problems are too complicated for a one-size-fits-all model. By orchestrating multiple models, each trained to handle specific aspects of a problem or data type, we can create a more comprehensive and effective solution. There are several ways to implement model composition, including but not limited to: Sequential processing: Models are arranged in a pipeline, where the output of one model serves as the input for the next. ... Parallel processing: Multiple models run in parallel, each processing the same input independently. Their outputs are then combined, either by averaging, voting or through a more complex aggregation model, to produce a final result. 
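The two patterns above can be sketched with trivial stand-in "models" (plain functions). Everything here is illustrative — a real system would plug trained estimators into the same composition shapes.

```python
# Sketch of the two composition patterns described above, with trivial
# stand-in "models" (plain functions). Every name here is illustrative.

from collections import Counter

def sequential(models, x):
    """Pipeline: each model's output feeds the next model's input."""
    for model in models:
        x = model(x)
    return x

def parallel_vote(models, x):
    """Ensemble: run every model on the same input, then majority-vote."""
    outputs = [model(x) for model in models]
    return Counter(outputs).most_common(1)[0][0]

# Sequential: normalize text, then "classify" by length.
pipeline = [str.strip, str.lower, lambda s: "long" if len(s) > 5 else "short"]
print(sequential(pipeline, "  Hello World  "))  # long

# Parallel: three weak classifiers vote on the same input.
voters = [lambda x: x > 0, lambda x: x >= 1, lambda x: x > 100]
print(parallel_vote(voters, 42))  # True
```

Averaging, weighted voting, or a learned aggregation model can replace the majority vote without changing the overall structure — which is the practical appeal of treating composition as an architectural pattern rather than a property of any single model.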


Securing IoT devices is a challenging yet crucial task for CIOs: Silicon Labs CEO

Likewise, as IoT deployments expand, we’ll need scalable infrastructure and solutions capable of accommodating growing device numbers and data volumes. Many countries have their own nuanced regulatory compliance schemes, which add another layer of complexity, especially for data privacy and security regulations. Notably, in India, cost considerations, including initial deployment costs and ongoing maintenance expenses, can be a barrier to adoption, necessitating an understanding of return on investment. ... Silicon Labs has played a key role in advancing IoT and AI adoption through collaborations with industry and academia, including a recent partnership with IIIT-H in India. In 2022, we launched India's first campus-wide Wi-SUN network at the IIIT-H Smart City Living Lab, enabling remote monitoring and control of campus street lamps. This network provides students and researchers with hands-on experience in developing smart city solutions. Silicon Labs also supports STEM education initiatives like Code2College to inspire innovation in the IoT and AI fields.


Cyber resilience: A business imperative CISOs must get right

Often, organizations have more capabilities than they realize, but these resources can be scattered throughout different departments. And each group responsible for establishing cyber resilience might lack full visibility into the existing capabilities within the organization. “Network and security operations have an incredible wealth of intelligence that others would benefit from,” Daniels says. Many companies are integrating cyber resilience into their enterprise risk management processes. They have started taking proactive measures to identify vulnerabilities, assess risks, and implement appropriate controls. “This includes exposure assessment, regular validation such as penetration testing, and continuous monitoring to detect and respond to threats in real-time,” says Angela Zhao, director analyst at Gartner. ... The rise of generative AI as a tool for hackers further complicates organizations’ resilience strategies. That’s because generative AI equips even low-skilled individuals with the means to execute complex cyber attacks. As a result, the frequency and severity of attacks might increase, forcing businesses to up their game. 


Is an open-source AI vulnerability next?

The challenges within the AI supply chain mirror those of the broader software supply chain, with added complexity when integrating large language models (LLMs) or machine learning (ML) models into organizational frameworks. For instance, consider a scenario where a financial institution seeks to leverage AI models for loan risk assessment. This application demands meticulous scrutiny of the AI model’s software supply chain and training data origins to ensure compliance with regulatory standards, such as prohibiting protected categories in loan approval processes. To illustrate, let’s examine how a bank integrates AI models into its loan risk assessment procedures. Regulations mandate strict adherence to loan approval criteria, forbidding the use of race, sex, national origin, and other demographics as determining factors. Thus, the bank must consider and assess the AI model’s software and training data supply chain to prevent biases that could lead to legal or regulatory complications. This issue extends beyond individual organizations. The broader AI technology ecosystem faces concerning trends. 


Google’s call-scanning AI could dial up censorship by default, privacy experts warn

Google’s demo of the call scam-detection feature, which the tech giant said would be built into a future version of its Android OS — estimated to run on some three-quarters of the world’s smartphones — is powered by Gemini Nano, the smallest of its current generation of AI models meant to run entirely on-device. This is essentially client-side scanning: A nascent technology that’s generated huge controversy in recent years in relation to efforts to detect child sexual abuse material (CSAM) or even grooming activity on messaging platforms. ... Cryptography expert Matthew Green, a professor at Johns Hopkins, also took to X to raise the alarm. “In the future, AI models will run inference on your texts and voice calls to detect and report illicit behavior,” he warned. “To get your data to pass through service providers, you’ll need to attach a zero-knowledge proof that scanning was conducted. This will block open clients.” Green suggested this dystopian future of censorship by default is only a few years out from being technically possible. “We’re a little ways from this tech being quite efficient enough to realize, but only a few years. A decade at most,” he suggested.


Data strategy? What data strategy?

A recent survey of UKI SAP users found that only 12 percent of respondents had a data strategy that covers their entire organization - these are people who are very likely to be embarking on tricky migrations to S/4HANA. Without properly understanding and governing the data they’re migrating, they’re en route to some serious difficulties. That’s because, more often than not, when a digital transformation project is on the cards, data takes a back seat. In the flurry of deadlines, testing, and troubleshooting, it feels so much more important to get the infrastructure in place and deal with the data later. The single goal is switching on the new system. Fixing the data flaws that caused so many headaches with the old solution is rarely top of the list. But those flaws and headaches are telling you something: your data needs serious attention. Unless you take action, those data silos that slow down decision-making and the data management challenges that are a blocker to innovation will follow you to your new infrastructure.


Designing and developing APIs with TypeSpec

TypeSpec is in wide use inside Microsoft, having spread from its original home in the Azure SDK team to the Microsoft Graph team, among others. Having two of Microsoft’s largest and most important API teams using TypeSpec is a good sign for the rest of us, as it both shows confidence in the toolkit and ensures that the underlying open-source project has an active development community. Certainly, the open-source project, hosted on GitHub, is very active. It recently released TypeSpec 0.56 and has received over 2000 commits. Most of the code is written in TypeScript and compiled to JavaScript so it runs on most development systems. TypeSpec is cross-platform and will run anywhere Node.js runs. Installation is done via npm. While you can use any programmer’s editor to write TypeSpec code, the team recommends using Visual Studio Code, as a TypeSpec extension for VS Code provides a language server and support for common environment variables. This behaves like most VS Code language extensions, giving you diagnostic tools, syntax highlights, and IntelliSense code completion. 


What’s holding CTOs back?

“Obviously, technology strategy and business strategy have to be ultimately driven by the vision of the organization,’’ Jones says, “but it was surprising that over a third of CTOs we surveyed felt they weren’t getting clear vision and guidance.” The CTO role also means different things in different organizations. “The CTO role is so diverse and spans everything from a CTO who works for the CIO and is making the organization more efficient, all the way to creating visibility for the future and transformations,’’ Jones says. ... Plexus Worldwide’s McIntosh says internal politics and some level of bureaucracy are unavoidable for CTOs seeking to push forward technology initiatives. “Navigating and managing this within an organization requires a balance of experience and influence to lessen any potential negative impact,’’ he says. Experienced leaders who have been with a company a long time “are often skilled at understanding the intricate web of relationships, power dynamics, and competing interests that shape internal politics and bureaucratic hurdles,’’ McIntosh says. 



Quote for the day:

"The leader has to be practical and a realist, yet must talk the language of the visionary and the idealist." -- Eric Hoffer

Daily Tech Digest - March 10, 2024

What’s the privacy tax on innovation?

A few decades ago, California had one of the strongest definitions for certifying Organic foods in the US. Eventually, the US government stepped in with a watered-down definition. Despite the pain of new privacy controls, the US data broker industry will lobby for a similar approach to at least harmonize privacy regulations at the Federal level that limit the impact on their business models when operating across state lines. For businesses and consumers, a more equitable approach would be to add a few more teeth to the cost of data misuse arising from legal sales, employee theft, or breaches. A few high-profile payouts arising from theft or when this data is used as part of multi-million dollar ransomware attacks on critical business systems would have a focusing effect on better privacy management practices. Another option is to turn to banks as holders of trust. Banks may be a good first point for managing the financial data we directly share with them. But what about all the data that others gather that may not be tied to traditional identifiers like social security numbers (SSN) used to unify data, such as IP addresses, phone numbers, Wi-Fi hubs, or the trail of GPS dots that gravitate to your home or office?


Living with the ghost of a smart home’s past

There were the window shades that always opened at 8AM and always closed at sundown. My brother disconnected everything that looked like a hub, and still, operating on some inaccessible internal clock, the shades carried on as they were once programmed to do. ... This is the state of home ownership in 2024! People have been making their homes smart with off-the-shelf parts for well over a decade now. Sometimes they sell those homes, and the new homeowners find themselves mired in troubleshooting when they should be trying to pick out wall colors. Some former homeowners will provide onboarding to the home’s smart home system, but most do as the guy who used to own my brother’s house did. They walk away and leave it as an adventure for the next person. ... I really hope the new renters of my old Brooklyn walk-up appreciate all the 2014 Philips Hue lights I left installed in the basement. There’s a calculus you make as you’re moving. It’s a hectic time, and there’s a lot to be done. Do you want to spend half the day freeing all those Hue bulbs from their obnoxious and broken recessed light housings, or do you want to leave a potential gift for the next homeowner and get started on nesting in your new place? 


Overcoming the AI Privacy Predicament

According to one study by Brookings, while 57% of consumers felt that AI will have a net negative impact on privacy, 34% were unsure about how AI would affect their privacy. Indeed, AI evokes a mixed set of thoughts and emotions in consumers. For most people, the promise of AI is clear: from increasing efficiency, to automating mundane tasks and freeing up more time for creative work, to improving outcomes in areas such as healthcare and education. ... In the realm of AI, the lack of trust is significant. Indeed, 81% of consumers think the information collected by AI companies will be used in ways people are uncomfortable with, as well as in ways that were not originally intended. That consumers are put in a seemingly impossible predicament regarding their privacy leaves them little choice but to (a) consent, or (b) forgo use of the product or service. Both choices leave consumers wanting more from the digital economy. When a new technology has negative implications for privacy, consumers have shown they are willing to engage in privacy-protective behaviors, such as deleting an app, withholding personal information, or abandoning an online purchase altogether.


How Static Analysis Can Save Your Software

While static analysis is a means of pattern detection, fixing an actual bug (for example, dereferencing a null pointer) is much harder, albeit possible. It becomes mathematically difficult to track exponentially increasing possible states. We call this “path explosion.” Say you’re writing code that, given two integers, divides one by the other, and there are various failure modes depending on the integers’ values. But what if the denominator is zero? That results in undefined behavior, and it means you need to look at where those integers came from, their possible values and what branches they took along the way. If you can see that the denominator is checked against zero before the division — and branches away if it is — you should be safe from division-by-zero issues. This theoretical stepping through stages of code is called “symbolic execution.” It’s not too complicated if the checkpoint is fairly close to the division process, but the further away it gets, the more branches you must account for. Crossing the function boundary gets even trickier. But once you have calls from other translation units, the problem becomes intractable in the general case. 
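The division-by-zero reasoning can be made concrete with a toy "symbolic" pass: track the set of values a denominator might hold along each path and report whether zero can reach the division. This is purely a teaching sketch under invented names — a real analyzer tracks abstract value ranges per path, which is exactly where the path-explosion cost comes from.

```python
# Toy illustration of the division-by-zero reasoning above: a naive
# "symbolic" pass tracks the set of values a denominator might hold and
# reports whether zero can reach the division. A teaching sketch only.

def may_divide_by_zero(possible_denominators, guarded):
    """If the code branches away on zero (guarded), zero never reaches
    the division; otherwise any path carrying zero is a defect."""
    reaching = set(possible_denominators)
    if guarded:
        reaching.discard(0)   # an `if d == 0: return` branch prunes the zero path
    return 0 in reaching

# Unguarded: an input of 0 flows straight to the division.
print(may_divide_by_zero({-2, 0, 7}, guarded=False))  # True

# Guarded: the zero check close to the division prunes that path.
print(may_divide_by_zero({-2, 0, 7}, guarded=True))   # False
```

The sketch also shows why proximity matters: here the guard is a single boolean, but when the check sits several calls away from the division, the analyzer must carry the pruned value set across every intervening branch and function boundary.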


Avoiding Shift Left Exhaustion – Part 1

Shift left requires developers to be involved in testing, quality assurance, and collaboration throughout the development cycle. While this is undoubtedly beneficial for the final product, it can lead to an increased workload for developers who must balance their coding responsibilities with testing and problem-solving tasks. ... Adapting to Shift left practices often requires developers to acquire new skills and stay current with the latest testing methodologies and tools. This continuous learning can be intellectually stimulating and exhausting, especially in an industry that evolves rapidly. Developers must understand new tools, processes, and technologies as more things get moved earlier in the development lifecycle. ... The added pressure of early and continuous testing and the demand for faster development cycles can lead to developer burnout. When developers are overburdened, their creativity and productivity may suffer, ultimately impacting the software quality they produce. ... Shifting testing and quality assurance left in the development process may impose strict time constraints. Developers may feel pressured to meet tight deadlines, which can be stressful and lead to rushed decision-making, potentially compromising the software’s quality.


Ransomware Attacks on Critical Infrastructure Are Surging

Especially under fire are critical services. Healthcare and public health agencies dominated, filing 249 reports to IC3 last year over ransomware attacks, followed by 218 reports from critical manufacturing and 156 from government facilities. Ransomware-wielding attackers are potentially targeting these sectors most because they perceive the victims as having a proclivity to pay, given the risk to life or essential business processes posed by their systems being disrupted. Last year, IC3 received a ransomware report from at least one victim in all of the 16 critical infrastructure sectors - which include financial services, food and agriculture, energy and communications - except for two: dams and nuclear reactors, materials and waste. The ransomware group tied to the largest number of successful attacks against critical infrastructure reported to IC3 last year was LockBit, followed by Alphv/BlackCat, Akira, Royal and Black Basta. Law enforcement recently disrupted Alphv/BlackCat, as well as LockBit, after which each group separately claimed to have rebooted before appearing to go dark. 


What’s the missing piece for mainstream Web3 adoption?

Today’s Web3 lacks a unifying ecosystem, causing the market to fracture into multiple, independently evolving use cases. Crypto enthusiasts have to use various decentralized applications (DApps) and platforms to perform multiple transactions and interact with the different sectors of Web3. However, this isn’t a sustainable growth model for the Web3 industry and is more of a deterrent rather than a benefit when it comes to crypto adoption. ... Recognizing the need for a more integrated approach, some Web3 players are moving beyond the hype. Legion Network is emerging as a notable example among these. As a one-stop shop for Web3, Legion Network addresses the complexity of the industry and reaches new audiences. It brings together essential Web3 use cases, including a proprietary crypto wallet with comprehensive portfolio tracking, DeFi swaps and bridges, engaging play-to-earn/win games, captivating quests with prize rewards, a launchpad for emerging projects and a unique SocialFi experience that fosters community engagement.


What’s Driving Changes in Open Source Licensing?

In response to the challenges posed by cloud computing, some vendor-driven open source projects have changed their licenses or their GTM models. For example, MongoDB, Elastic, Confluent, Redis Labs and HashiCorp have adopted new licenses that restrict the use of their software-as-a-service by third parties or require them to pay fees or share their modifications. These changes are intended to protect the revenue and sustainability of the original vendors and to ensure that they can continue to invest in the open source project. However, these changes have also caused some controversy and backlash from the user community, who may feel that the project is becoming less open and more proprietary or that they are losing some of the benefits and freedoms of open source. However, community-driven open source projects have largely maintained their permissive licenses and their collaborative approach. These projects still benefit from the diversity and scale of their user community, who contribute to the development, maintenance, support and security of the software. These projects also leverage the support of organizations and foundations, such as the Linux Foundation, the Apache Software Foundation and the CNCF, who provide governance, funding and infrastructure. 


Botnets: The uninvited guests that just won’t leave

Reducing response time is vital. The longer the dwell time, the more likely it is that botnets can impact a business, particularly given that botnets can spread across many devices in a short period. How can security teams improve detection processes and shrink the time it takes to respond to malicious activity? Security practitioners should have multiple tools and strategies at their disposal to protect their organization’s networks against botnets. An obvious first step is to prevent access to all recognized C2 databases. Next, leverage application control to restrict unauthorized access to your systems. Additionally, use Domain Name System (DNS) filtering to target botnets explicitly, concentrating on each category or website that might expose your system to them. DNS filtering also helps to mitigate the Domain Generation Algorithms that botnets often use. Monitoring data while it enters and leaves devices is vital as well, as you can spot botnets as they attempt to infiltrate your computers or those connected to them. This is what makes security information and event management technology paired with malicious indicators of compromise detections so critical to protecting against bots. 
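The DNS-filtering idea above can be sketched as a simple decision function: drop lookups whose domain is on a C2 blocklist, and flag names that look machine-generated, which is one crude stand-in for detecting Domain Generation Algorithms. The blocklist, domains, and consonant-run heuristic are all illustrative assumptions, not a production detection rule.

```python
# Sketch of DNS filtering against botnets: block known C2 domains and flag
# names that look machine-generated (a crude DGA heuristic based on long
# consonant runs). Blocklist, domains, and threshold are illustrative.

import re

C2_BLOCKLIST = {"evil-c2.example", "botnet-panel.example"}

def looks_generated(domain, max_consonant_run=5):
    """Very rough DGA signal: long consonant runs are rare in real names."""
    label = domain.split(".")[0]
    runs = re.findall(r"[bcdfghjklmnpqrstvwxz]+", label)
    return any(len(run) >= max_consonant_run for run in runs)

def filter_dns_query(domain):
    """Return 'block', 'flag', or 'allow' for an outbound DNS lookup."""
    if domain in C2_BLOCKLIST:
        return "block"
    if looks_generated(domain):
        return "flag"    # hand off to the SIEM for correlation with other IoCs
    return "allow"

print(filter_dns_query("evil-c2.example"))  # block
print(filter_dns_query("xkqzvbw.example"))  # flag
print(filter_dns_query("news.example"))     # allow
```

The "flag" path is where this connects to the SIEM point in the text: a single odd-looking lookup proves little on its own, but correlated with other indicators of compromise it shortens the dwell time between infection and detection.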


Are You Ready to Protect Your Company From Insider Threats? Probably Not

The real problem is that employees and employers don’t trust each other. This is an enormous risk for employers, because insider threats, security risks that originate from within the company, are more likely to emerge or intensify when tensions are high and motivations such as financial strain, dissatisfaction or desperation drive individuals to act against their own organization. That’s the bad news. The worst news is that most companies are unprepared to meet the moment. ... Insider threats often betray their motivation. Sometimes, they tell colleagues about their intentions. Other times, their actions speak louder than words: attempts to work around security protocols, active resentment toward coworkers or leadership, or general job dissatisfaction can be red flags that an insider threat is about to act. Explaining the impact of human intelligence, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) writes, “An organization’s own personnel are an invaluable resource to observe behaviors of concern, as are those who are close to an individual, such as family, friends, and coworkers.”



Quote for the day:

"Leaders must be close enough to relate to others, but far enough ahead to motivate them." -- John C. Maxwell

Daily Tech Digest - October 03, 2023

How AI can be a ‘multivitamin supplement’ for many industries

AI won’t replace humans, just as supplements don’t replace a healthy diet. Still, it will strengthen companies’ existing operations and fill in the gaps that currently make work more burdensome for human laborers. ... It’s exciting to realize that there will soon be professions that we don’t even have names for yet. As the technology ages and matures and governing bodies create the necessary laws and regulations, our current state of uncertainty will transform into an exciting, bright new future of human-tech cooperation. We are already seeing this future take shape. For instance, MarTech companies are testing AI-powered fraud detection to supplement the work that human experts do to monitor traffic quality and transparency. This not only eases the human workload but helps companies save resources while getting better results overall. Similar benefits of human-AI collaboration can be seen in healthcare, where AI can be trained to assist patients with recovery treatments or perform routine tasks in medical offices or hospitals, freeing nurses and doctors to focus on patient outcomes.


Banking on Innovation: How Finance Transforms Technological Growth for Decision Makers

Regulation is a sensitive topic for the financial industry. While the need for a certain degree of oversight is universally accepted, excessive regulation can stifle the very innovation that drives economic growth. On the other hand, too little regulation can open the doors to risk accumulation and financial crises. Striking this balance is one of the most challenging tasks that government leaders face. Policies must be evidence-based, derived from transparent risk-assessment models and economic simulations. Regulatory sandboxes could offer a safe environment for financial institutions to experiment with new services and products under the watchful eye of regulators, thereby fostering innovation while ensuring compliance. ... One of the most potent ways in which PPPs can contribute to revenue management is through asset monetization. Governments often sit on a wealth of underutilized assets, ranging from real estate to utilities. A PPP can unlock the value of these assets by involving private-sector expertise and investment. 


Microsoft Releases Its Own Distro of Java 21

Microsoft’s continuing support for OpenJDK is a strong indicator of how important Java is in the enterprise software space. “And the new features of Java 21 such as lightweight threads are maintaining Java’s relevance in the cloud native age,” said Mike Milinkovich, executive director of the Eclipse Foundation. “Being one of the first vendors to ship Java SE 21 support shows how focused Microsoft is in meeting the needs of Java developers deploying workloads on Azure.” Also, Spring developers will be pleased to know that Spring Boot 3.2 now supports Java 21 features. Many other frameworks and libraries will soon release their JDK 21-supported versions. “Microsoft has some of the best developer tool makers in the world — to have them add Java to the mix makes sense,” said Richard Campbell, founder of Campbell & Associates. “Of course, that happened a couple of years ago, and JDK 21 is just the latest implementation. In the end, Microsoft wants to ensure that Azure is a great place to run Java, so having a team working on Java running in Azure helps to make that true. What does it mean for the ecosystem? More choices for implementations of Java, better Java tooling, and more places to run Java fast and securely.”


Why embracing complexity is the real challenge in software today

The reason we can’t just wish away or “fix” complexity is that every solution — whether it’s a technology or methodology — redistributes complexity in some way. Solutions reorganize problems. When microservices emerged (a software architecture approach where an application or system is composed of many smaller parts), they seemingly solved many of the maintenance and development challenges posed by monolithic architectures (where the application is one single interlocking system). However, in doing so microservices placed new demands on engineering teams; they require greater maturity in terms of practices and processes. This is one of the reasons why we cautioned people against what we call “microservice envy” in a 2018 edition of the Technology Radar, with CTO Rebecca Parsons writing that microservices would never be recommended for adoption on Technology Radar because “not all organizations are microservices-ready.” We noticed there was a tendency to look to adopt microservices simply because it was fashionable. This doesn’t mean the solution is poor or defective. 


Balancing Cost and Resilience: Crafting a Lean IT Business Continuity Strategy

Effective monitoring is the backbone of a resilient infrastructure. The approach should focus on:

Filtering out the noise - Monitoring solutions need to ensure that only critical notifications are sent out, preventing information overload and ensuring that the right people are alerted promptly when critical events inevitably happen.

Acting quickly and decisively - Time is of the essence during disruptions. IT, DevOps, SIRT, and even PR teams need to be well coordinated for various types of events. From security breaches to data center fires or even just mundane equipment failures, anything that might result in customer or operation disruptions will involve cross-team communications and collaboration. The only way to get better at handling these is to have documentation on what should be done, a clear chain of command, and practice drills.

In conclusion, a comprehensive backup and recovery strategy is essential for businesses aiming for uninterrupted operations. While there are many solutions available in the market, it’s crucial to find one that aligns with your business needs. 
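The noise-filtering idea above can be sketched as a simple alert gate: drop low-severity events and suppress repeats of an alert already forwarded recently. This is an illustrative sketch only; the severity scale, field names and five-minute deduplication window are assumptions, not any particular monitoring product’s API.

```python
def filter_alerts(alerts, min_severity=3, dedup_window=300):
    """Forward only critical, non-duplicate alerts.

    alerts: dicts with 'source', 'message', 'severity' (int) and
    'timestamp' (seconds). Repeats of a forwarded alert from the same
    source within dedup_window seconds are suppressed.
    """
    last_seen = {}  # (source, message) -> timestamp last forwarded
    forwarded = []
    for alert in alerts:
        if alert["severity"] < min_severity:
            continue  # below the critical threshold: treat as noise
        key = (alert["source"], alert["message"])
        ts = alert["timestamp"]
        if key in last_seen and ts - last_seen[key] < dedup_window:
            continue  # repeat within the window: suppress
        last_seen[key] = ts
        forwarded.append(alert)
    return forwarded

alerts = [
    {"source": "db1", "message": "disk full", "severity": 5, "timestamp": 0},
    {"source": "db1", "message": "disk full", "severity": 5, "timestamp": 60},
    {"source": "web1", "message": "slow response", "severity": 2, "timestamp": 90},
    {"source": "db1", "message": "disk full", "severity": 5, "timestamp": 400},
]
print(len(filter_alerts(alerts)))  # prints 2: the repeat and the low-severity alert are dropped
```

Real systems route the surviving alerts to the right on-call team, which is where the clear chain of command and practice drills come in.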


How do you solve a problem like payments infrastructure?

Today, banks need to be willing to adopt new technology to change, and this will involve working with a third-party service provider. Another roundtable participant added that as part of this process, it is imperative to continually validate and re-evaluate new enhancements. Otherwise, banks will come to believe that the improvements they have made are unique, when in fact competitors will keep pace or even pull ahead when it comes to innovation or enticing new customers. This banker revealed that they opted not to disconnect from their existing infrastructure, but instead chose a top-layer architecture to process payments in a more efficient way. In line with this, the participant added that culture must be considered, because this is what brings together the different components that are needed and ultimately reveals when the time is right to change the systems. Providing background information, this Sibos attendee mentioned that 15 years ago, the bank considered whether it would be more cost effective to map local, regional, or global ISO 20022 messaging into existing architecture or to create a new platform that could work for the next 20 years. 


GenAI: friend or foe in fraud risk management?

Building high-performance fraud detection algorithms today depends on real-life customer and transaction data to train and validate the models, which has remained a constraint. GenAI can help with realistic synthetic data creation for model training and validation, and with scenario and fraud-attack simulation to identify vulnerabilities and design controls to mitigate these risks. Customer due diligence (CDD) is a critical function in fraud prevention – be it new client onboarding or new credit approvals (loans, credit cards, increasing credit limits) for existing clients. GenAI can be a great tool to go through piles of KYC documentation, cross-reference it with customer-filled forms and the FI’s other subscribed data sources, and produce a CDD summary report. GenAI can also be used to analyse user communications with FIs – such as emails, chats, documents and product and service requests – to extract insights on financial behaviour, sentiment analysis for intentions and potential risks of fraud. Fraud investigations can also leverage GenAI for alert and dispute resolution by accessing different sources of information on the context and providing a summary of the case that will aid in its decisioning.
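As a toy illustration of the synthetic-data idea, the sketch below generates labeled transactions for training or validating a fraud model. A real pipeline would use a generative model fitted to production data; the field names, distributions and 2% fraud rate here are all assumptions chosen only to show the shape of such a dataset.

```python
import random

def synth_transactions(n, fraud_rate=0.02, seed=42):
    """Generate toy labeled transactions for training/validating a fraud model."""
    rng = random.Random(seed)  # fixed seed keeps the dataset reproducible
    rows = []
    for i in range(n):
        is_fraud = rng.random() < fraud_rate
        # in this toy distribution, fraudulent amounts skew larger
        amount = rng.lognormvariate(6, 1) * (3 if is_fraud else 1)
        rows.append({
            "tx_id": i,
            "amount": round(amount, 2),
            "hour": rng.randint(0, 23),                          # hour of day
            "foreign": rng.random() < (0.4 if is_fraud else 0.05),  # cross-border flag
            "label": int(is_fraud),
        })
    return rows

data = synth_transactions(1000)
print(len(data), sum(r["label"] for r in data))  # 1000 rows, roughly 2% labeled fraud
```

Because no real customer records are involved, a dataset like this can be shared freely across model-development teams, which is precisely the constraint the excerpt describes.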


Weaving Cyber Resilience into the Strategic Fabric of Higher Education Institutions

There is no shortage of steps that institutions can take to bolster their cyber resilience and ensure that, should the worst happen, they’re prepared. A good place to start is by assessing the institution’s current level of resilience and looking for any gaps or obstacles. In many cases, Goerlich says, the key is simplification. For example, adopting a zero-trust security strategy can also improve a college or university’s ability to respond, maintain continuity and bounce back following an adverse event, he says. Another factor complicating resiliency for many institutions is overly complex network environments, particularly in the cloud. As colleges and universities clamor to embrace digital transformation and cloud networking, it’s not uncommon for their environments to grow to a degree that becomes unmanageable. But uncontrolled and unregulated cloud sprawl can have a serious impact on an institution’s resilience. Developing easy-to-follow approaches and processes — along with adopting simplified, automated and easy-to-use technology solutions — can make a significant difference, Goerlich says. 


How to make asynchronous collaboration work for your business

Asynchronous working can bring some benefits that synchronous work can’t – most notably speed. “Real-time communication means everyone must be in the same place, or at least the same time zone, in order for work to happen. If workers need to wait for syncs to decide or act on something, it slows down the company as a whole and reduces its ability to compete,” says van der Voort. Asynchronous collaboration allows people to work at their own pace, and does not force them to wait for input from others. Letting morning people, evening people and midnight-oil people collaborate across geographies can in some cases deliver higher-quality results than forcing everyone to come together for a 10am video call. To get this working well, policies such as having core working hours for each staff member, and having very clear goals and anticipated outcomes for all meetings, can be incredibly useful. “One of the most significant and highly sought-after benefits asynchronous collaboration offers is a dramatic reduction in meetings,” argues Lawyer. “It allows team members to contribute in the least amount of minutes, freeing up time for other work.”


Securing the Evolution of Smart Home Applications

Very few in the cybersecurity community have forgotten one of the most noteworthy incidents, the Mirai botnet, which struck back in 2016. Attackers behind the botnet targeted the site of well-known cybersecurity journalist Brian Krebs. The Distributed Denial of Service (DDoS) attack lasted for days – 77 hours, to be exact – and involved 24,000 Mirai-infected Internet-of-Things devices, including personal surveillance cameras. Jumping ahead to June of this year, 2023, the Federal Trade Commission (FTC) settled a case with Ring’s owner, Amazon. The online retailing giant agreed to pay the FTC nearly $31 million in penalties to settle recently filed federal lawsuits over privacy violations. The FTC alleged that Ring compromised customer privacy by allowing any employee or contractor to access consumers’ private videos. The FTC also claimed hackers used Ring cameras’ two-way functionality to harass and even physically threaten consumers – including children – if they did not pay a ransom. These types of incidents clearly illustrate how critical it is to secure devices like cameras in a smart home.



Quote for the day:

"Before you are a leader, success is all about growing yourself when you become a leader, success is all about growing others." -- Jack Welch