
Daily Tech Digest - July 15, 2025


Quote for the day:

“Rarely have I seen a situation where doing less than the other guy is a good strategy.” -- Jimmy Spithill


CyberArk: Rise in Machine Identities Poses New Risks

The CyberArk report outlines the substantial business consequences of failing to protect machine identities, leaving organizations vulnerable to costly outages and breaches. Seventy-two percent of organizations experienced at least one certificate-related outage over the past year - a sharp increase compared to prior years. Additionally, 50% reported security incidents or breaches stemming from compromised machine identities. Companies that have experienced non-human identity security breaches include xAI, Uber, Schneider Electric, Cloudflare and BeyondTrust, among others. "Machine identities of all kinds will continue to skyrocket over the next year, bringing not only greater complexity but also increased risks," said Kurt Sand, general manager of machine identity security at CyberArk. "Cybercriminals are increasingly targeting machine identities - from API keys to code-signing certificates - to exploit vulnerabilities, compromise systems and disrupt critical infrastructure, leaving even the most advanced businesses dangerously exposed." ... Fifty percent of security leaders reported security incidents or breaches linked to compromised machine identities in the previous year. These incidents led to delays in application launches for 51% of companies, customer-impacting outages for 44%, and unauthorized access to sensitive systems for 43%.
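Certificate-related outages of the kind the report describes are usually preventable with basic expiry tracking across the machine-identity inventory. As a minimal sketch of the idea (the inventory, identity names, and field names below are invented for illustration):

```python
from datetime import datetime, timedelta

# Hypothetical inventory of machine identities and their certificate expiry dates.
inventory = [
    {"name": "api-gateway-tls", "expires": datetime(2025, 8, 1)},
    {"name": "build-signing-key", "expires": datetime(2026, 1, 15)},
    {"name": "payments-mtls-client", "expires": datetime(2025, 7, 20)},
]

def expiring_soon(identities, now, window_days=30):
    """Return the names of identities whose certificates expire within the window."""
    horizon = now + timedelta(days=window_days)
    return sorted(i["name"] for i in identities if i["expires"] <= horizon)

if __name__ == "__main__":
    print(expiring_soon(inventory, datetime(2025, 7, 15)))
```

In practice this check would run against a live inventory (or certificate transparency logs) and feed alerting, but the core logic is no more than a dated comparison done continuously.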


What Can Businesses Do About Ethical Dilemmas Posed by AI?

Digital discrimination is a product of bias incorporated into AI algorithms and deployed at various levels of development and deployment. The biases mainly result from the data used to train the large language models (LLMs). If the data reflects previous inequities or underrepresents certain social groups, the algorithm has the potential to learn and perpetuate those inequities. Biases may occasionally culminate in contextual misuse, when an algorithm is used beyond the environment or audience for which it was intended or trained. Such a mismatch may result in poor predictions, misclassifications, or unfair treatment of particular groups. Lack of monitoring and transparency merely adds to the problem. In the absence of oversight, biased results go undiscovered. ... Human-in-the-loop systems allow intervention in real time whenever AI acts unjustly or unexpectedly, thus minimizing potential harm and reinforcing trust. Human judgment makes choices more inclusive and socially sensitive by including cultural, emotional, or situational elements, which AI lacks. When humans remain in the loop of decision-making, accountability is shared and traceable. This removes ethical blind spots and holds users accountable for consequences.
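The human-in-the-loop idea often reduces to a simple routing rule: auto-apply only decisions that are both high-confidence and low-impact, and queue everything else for a person. A minimal sketch (the threshold, impact labels, and function name are illustrative, not from the article):

```python
def route_decision(confidence, impact, threshold=0.9):
    """Route a model decision: auto-apply only when the model is confident
    AND the stakes are low; otherwise a human reviews before anything ships."""
    if confidence >= threshold and impact == "low":
        return "auto"
    return "human_review"

# A loan-screening model, for example, should never auto-act on high-stakes cases:
print(route_decision(0.95, "low"))   # confident and low-stakes
print(route_decision(0.95, "high"))  # high-stakes always gets a reviewer
```

Real deployments add an audit log of who approved what, which is how the "shared and traceable accountability" the article mentions becomes concrete.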


Beyond the hype: AI disruption in India’s legal practice

The competitive dynamics are stark. When AI can complete a ten-hour task in two hours, firms face a pricing paradox: how to maintain profitability while passing efficiency gains to clients? Traditional hourly billing models become unsustainable when the underlying time economics change dramatically. ... Effective AI integration hinges on a strong technological foundation, encompassing secure data architecture, advanced cybersecurity measures and seamless interoperability between new systems and existing platforms. SAM’s centralised Harvey AI approach and CAM’s multi-tool strategy both imply significant investment in these backend capabilities. ... Merely automating existing workflows fails to leverage AI’s transformative potential. To unlock AI’s full transformative value, firms must rethink their legal processes – streamlining tasks, reallocating human resources to higher-order functions and embedding AI at the core of decision-making processes and document production cycles. ... AI enables alternative service models that go beyond the billable hour. Firms that rethink how they price – say, by offering subscription-based or outcome-driven services – and position themselves as strategic partners rather than task executors will be best positioned to capture long-term client value in an AI-first legal economy.


‘Chronodebt’: The lose/lose situation few CIOs can escape

One needn’t be an expert in the field of technical architecture to know that basing a capability as essential as air traffic control on such obviously obsolete technology is a bad idea. Someone should lose their job over this. And yet, nobody has lost their job over this, nor should they have. That’s because the root cause of the FAA’s woes — poor chronodebt management, in case you haven’t been paying attention — is a discipline that’s rarely tracked by reliable metrics and almost-as-rarely budgeted for. Metrics first: While the discipline of IT project estimation is far from reliable, it’s good enough to be useful in estimating chronodebt’s remediation costs — in the FAA’s case what it would have to spend to fix or replace its integrations and the integration platforms on which those integrations rely. That’s good enough, with no need for precision. Those running the FAA for all these years could, that is, estimate the cost of replacing the programs used to export and update its repositories, and replacing the 3 ½” diskettes and paper strips on which they rely. But, telling you what you already know, good business decisions are based not just on estimated costs, but on benefits netted against those costs. The problem with chronodebt is that there are no clear and obvious ways to quantify the benefits to be had by reducing it.


Can System Initiative fix devops?

System Initiative turns traditional devops on its head. It translates what would normally be infrastructure configuration code into data, creating digital twins that model the infrastructure. Actions like restarting servers or running complex deployments are expressed as functions, then chained together in a dynamic, graphical UI. A living diagram of your infrastructure refreshes with your changes. Digital twins allow the system to automatically infer workflows and changes of state. “We’re modeling the world as it is,” says Jacob. For example, when you connect a Docker container to a new Amazon Elastic Container Service instance, System Initiative recognizes the relationship and updates the model accordingly. Developers can turn workflows — like deploying a container on AWS — into reusable models with just a few clicks, improving speed. The GUI-driven platform auto-generates API calls to cloud infrastructure under the hood. ... An abstraction like System Initiative could embrace this flexibility while bringing uniformity to how infrastructure is modeled and operated across clouds. The multicloud implications are especially intriguing, given the rise in adoption of multiple clouds and the scarcity of strong cross-cloud management tools. A visual model of the environment makes it easier for devops teams to collaborate based on a shared understanding, says Jacob — removing bottlenecks, speeding feedback loops, and accelerating time to value.
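The "infrastructure as data" idea can be sketched in a few lines: model resources as plain records, then infer the required actions by diffing the desired model against the actual one. This is a loose illustration of the concept only; the resource names and fields are invented and do not reflect System Initiative's actual data model:

```python
# Reality as currently observed, and the model the operator wants.
actual = {"web-1": {"type": "container", "image": "app:1.0"}}
desired = {
    "web-1": {"type": "container", "image": "app:1.1"},  # changed image
    "web-2": {"type": "container", "image": "app:1.1"},  # new resource
}

def infer_actions(actual, desired):
    """Compute the actions needed to make reality match the model."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name))
        elif actual[name] != spec:
            actions.append(("update", name))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return sorted(actions)

print(infer_actions(actual, desired))
```

Because the model is data rather than code, a UI can render it as a living diagram and regenerate the action plan on every edit, which is the behavior the article describes.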


An exodus evolves: The new digital infrastructure market

Regulatory pressures have crystallised around concerns over reliance on a small number of US-based cloud providers. With some hyperscalers openly admitting that they cannot guarantee data stays within a jurisdiction during transfer, other types of infrastructure make it easier to maintain compliance with UK and EU regulations. This is a clear strategy to avoid future financial and reputational damage. ... 2025 is a pivotal year for digital infrastructure. Public cloud will remain an essential part of the IT landscape. But the future of data strategy lies in making informed, strategic decisions, leveraging the right mix of infrastructure solutions for specific workloads and business needs. As part of our research, we assessed the shape of this hybrid market. ... With one eye to the future, UK-based cloud providers must be positioned as a strategic advantage, offering benefits such as data sovereignty, regulatory compliance, and reduced latency. Businesses will need to situate themselves ever more precisely on the spectrum of digital infrastructure. Their location will reflect how they embrace a hybrid model that balances public cloud, private cloud, colocation and on-premise options. This approach will not only optimise performance and costs but also provide long-term resilience in an evolving digital economy.


How Trump's Cyber Cuts Dismantle Federal Information Sharing

"The budget cuts, personnel reductions and other policy changes have decreased the volume and frequency of CISA's information sharing activities in both formal and informal channels," Daniel told ISMG. While sector-specific ISACs still share information, threat sharing efforts tied to federal funding - such as the Multi-State ISAC, which supports state and local governments - "have been negatively affected," he said. One former CISA staffer who recently accepted the administration's deferred resignation offer told ISMG the agency's information-sharing efforts "were among the first to take a hit" from the administration's cuts, with many feeling pressured into silence. ... Analysts have also warned that cuts to cyber staff across federal agencies and risks to initiatives including the National Vulnerability Database and Common Vulnerabilities and Exposures program could harm cybersecurity far beyond U.S. borders. The CVE program is dealing with backlogs and a recent threat to shut down funding over a federal contracting issue. Failure of the CVE Program "would have wide impacts on vulnerability management efficiency and effectiveness globally," said John Banghart, senior director for cybersecurity services at Venable and a key architect of the Obama administration's cybersecurity policy as a former director for federal cybersecurity for the National Security Council.


Securing vehicles as they become platforms for code and data

Recently, security researchers have demonstrated real-world attacks against connected cars, such as wireless brake manipulation on heavy trucks by spoofing J-bus diagnostic packets. Another recent example is successful attacks against autonomous car LIDAR systems. As EVs and advanced cars become more pervasive across our society, we expect these types of attacks and methods to continue to grow in complexity, which makes a continuous, real-time approach to securing the entire ecosystem (from charger, to car, to driver) even more important. ... Over-the-air (OTA) update hijacking is very real and often enabled by poor security design, such as lack of encryption, improper authentication between the car and backend, and lack of integrity or checksum validation. Attack vectors that the traditional computer industry has dealt with for years are now becoming a harsh reality in the automotive sector. Luckily, many of the same approaches used to mitigate these risks in IT can also apply here ... When we look at just the automobile, we have a variety of connected systems which typically all come from different manufacturers (Android Automotive, or QNX as examples), which increases the potential for supply chain abuse. We also have devices that the driver introduces, which interact with the car’s APIs, creating new entry points for attackers.
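The IT-style mitigation for OTA hijacking is integrity and authenticity checking on every update. Real OTA pipelines sign firmware with asymmetric keys (the private key held in an HSM, the public key baked into the vehicle); an HMAC stands in below only because Python's standard library has no public-key primitives, so treat this as a sketch of the verification step, not a production design:

```python
import hashlib
import hmac

SIGNING_KEY = b"illustrative-shared-secret"  # real systems: private key in an HSM

def sign_update(firmware: bytes) -> str:
    """Producer side: sign the firmware image before publishing it."""
    return hmac.new(SIGNING_KEY, firmware, hashlib.sha256).hexdigest()

def verify_update(firmware: bytes, signature: str) -> bool:
    """Vehicle side: refuse to install any image whose signature fails."""
    return hmac.compare_digest(sign_update(firmware), signature)

firmware = b"ecu-firmware-v2.1"
signature = sign_update(firmware)
print(verify_update(firmware, signature))           # genuine image passes
print(verify_update(b"tampered-image", signature))  # hijacked update is rejected
```

Combined with authenticated TLS between car and backend, this closes the specific gaps (no encryption, no integrity validation) the article lists.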


Strategizing with AI: How leaders can upgrade strategic planning with multi-agent platforms

Building resiliency and optionality into a strategic plan challenges humans’ cognitive (and financial) bandwidth. The seemingly endless array of future scenarios, coupled with our own human biases, conspires to anchor our understanding of the future in what we’ve seen in the past. Generative AI (GenAI) can help overcome this common organizational tendency for entrenched thinking, and mitigate the challenges of being human, while exploiting LLMs’ creativity as well as their ability to mirror human behavioral patterns. ... In fact, our argument reflects our own experience using a multi-agent LLM simulation platform built by the BCG Henderson Institute. We’ve used this platform to mirror actual war games and scenario planning sessions we’ve led with clients in the past. As we’ve seen firsthand, what makes an LLM multi-agent simulation so powerful is the possibility of exploiting two unique features of GenAI—its anthropomorphism, or ability to mimic human behavior, and its stochasticity, or creativity. LLMs can role-play in remarkably human-like fashion: Research by Stanford and Google published earlier this year suggests that LLMs are able to simulate individual personalities closely enough to respond to certain types of surveys with 85% accuracy as the individuals themselves.


The Network Challenges of IoT Integration

IoT interoperability and compatible security protocols are a particular challenge. Although NIST and ISO, among other organizations, have issued IoT standards, smaller IoT manufacturers don't always have the resources to follow their guidance. This becomes a network problem because companies have to retool these IoT devices before they can be used on their enterprise networks. Moreover, because many IoT gadgets are delivered with default security settings that are easy to undo, each device has to be hand-configured to ensure it meets company security standards. To avoid potential interoperability pitfalls, network staff should evaluate prospective technology before anything is purchased. ... First, to achieve high QoS, every data pipeline on the network must be analyzed -- as well as every single system, application and network device. Once assessed, each component must be hand-calibrated to run at the highest performance levels possible. This is a detailed and specialized job. Most network staff don't have trained QoS technicians on board, so they must go externally for help. Second, which areas of the business get maximum QoS, and which don't? A medical clinic, for example, requires high QoS to support a telehealth application where doctors and patients communicate. 
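Hand-calibrating QoS often comes down to marking traffic so that network gear can prioritize it. As a minimal sketch, an application can set DSCP markings through standard socket options; the traffic-class names and the choice of marks below are illustrative (switches and routers must also be configured to honor them):

```python
import socket

# TOS-byte values for common DSCP classes (the 6-bit DSCP shifted left by 2):
# EF (voice) = 46 -> 0xB8, AF41 (video) = 34 -> 0x88.
DSCP_TOS = {"voice": 0xB8, "video": 0x88, "default": 0x00}

def mark_socket(sock, traffic_class):
    """Mark outbound packets so routers and switches can prioritize them."""
    tos = DSCP_TOS.get(traffic_class, DSCP_TOS["default"])
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    return tos

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
mark_socket(sock, "voice")  # e.g., the audio leg of a telehealth session
sock.close()
```

In the clinic example above, the telehealth streams would carry EF/AF4x marks while bulk IoT telemetry stays best-effort, which is exactly the "who gets maximum QoS" decision the article raises.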

Daily Tech Digest - June 24, 2025


Quote for the day:

"When you stop chasing the wrong things you give the right things a chance to catch you." -- Lolly Daskal


Why Agentic AI Is a Developer's New Ally, Not Adversary

Because agentic AI can complete complex workflows rather than simply generating content, it opens the door to a variety of AI-assisted use cases in software development that extend far beyond writing code — which, to date, has been the main way that software developers have leveraged AI. ... But agentic AI eliminates the need to spell out instructions or carry out manual actions entirely. With just a sentence or two, developers can prompt AI to perform complex, multi-step tasks. It's important to note that, for the most part, agentic AI use cases like those described above remain theoretical. Agentic AI remains a fairly new and quickly evolving field. The technology to do the sorts of things mentioned here theoretically exists, but existing tool sets for enabling specific agentic AI use cases are limited. ... It's also important to note that agentic AI poses new challenges for software developers. One is the risk that AI will make the wrong decisions. Like any LLM-based technology, AI agents can hallucinate, causing them to perform in undesirable ways. For this reason, it's tough to imagine entrusting high-stakes tasks to AI agents without requiring a human to supervise and validate them. Agentic AI also poses security risks. If agentic AI systems are compromised by threat actors, any tools or data that AI agents can access (such as source code) could also be exposed.


Modernizing Identity Security Beyond MFA

The next phase of identity security must focus on phishing-resistant authentication, seamless access, and decentralized identity management. The key principle guiding this transformation is a principle of phishing resistance by design. The adoption of FIDO2 and WebAuthn standards enables passwordless authentication using cryptographic key pairs. Because the private key never leaves the user’s device, attackers cannot intercept it. These methods eliminate the weakest link — human error — by ensuring that authentication remains secure even if users unknowingly interact with malicious links or phishing campaigns. ... By leveraging blockchain-based verified credentials — digitally signed, tamper-evident credentials issued by a trusted entity — wallets enable users to securely authenticate to multiple resources without exposing their personal data to third parties. These credentials can include identity proofs, such as government-issued IDs, employment verification, or certifications, which enable strong authentication. Using them for authentication reduces the risk of identity theft while improving privacy. Modern authentication must allow users to register once and reuse their credentials seamlessly across services. This concept reduces redundant onboarding processes and minimizes the need for multiple authentication methods. 
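What makes FIDO2/WebAuthn phishing-resistant is that the authenticator's response is bound to a fresh challenge and to the requesting origin, so a response captured by a look-alike site verifies against nothing. A toy sketch of that property follows; real FIDO2 uses asymmetric keys that never leave the device, and an HMAC stands in here only because the Python standard library has no public-key primitives:

```python
import hashlib
import hmac
import secrets

def issue_challenge():
    """Server side: a fresh random challenge per ceremony defeats replay."""
    return secrets.token_bytes(32)

def authenticator_sign(device_key, challenge, origin):
    """Device side: the response binds challenge AND origin together,
    which is the property that defeats phishing sites."""
    return hmac.new(device_key, challenge + origin.encode(), hashlib.sha256).digest()

def server_verify(device_key, challenge, origin, response):
    expected = hmac.new(device_key, challenge + origin.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

key = secrets.token_bytes(32)
ch = issue_challenge()
resp = authenticator_sign(key, ch, "https://bank.example")
assert server_verify(key, ch, "https://bank.example", resp)       # genuine site
assert not server_verify(key, ch, "https://phish.example", resp)  # look-alike fails
```

Browsers enforce the origin binding automatically in WebAuthn, which is why even a user who clicks a phishing link cannot hand over a usable credential.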


The Pros and Cons of Becoming a Government CIO

Seeking a job as a government CIO offers a chance to make a real impact on the lives of citizens, says Aparna Achanta, security architect and leader at IBM Consulting -- Federal. CIOs typically lead a wide range of projects, such as upgrading systems in education, public safety, healthcare, and other areas that provide critical public services. "They [government CIOs] work on large-scale projects that benefit communities beyond profits, which can be very rewarding and impactful," Achanta observed in an online interview. "The job also gives you an opportunity for leadership growth and the chance to work with a wide range of departments and people." ... "Being a government CIO might mean dealing with slow processes and bureaucracy," Achanta says. "Most of the time, decisions take longer because they have to go through several layers of approval, which can delay projects.” Government CIOs face unique challenges, including budget constraints, a constantly evolving mission, and increased scrutiny from government leaders and the public. "Public servants must be adept at change management in order to be able to pivot and implement the priorities of their administration to the best of their ability," Tamburrino says. Government CIOs are often frustrated by a hierarchy that runs at a far slower pace than their enterprise counterparts.


Why work-life balance in cybersecurity must start with executive support

Watching your mental and physical health is critical. Setting boundaries is something that helps the entire team, not just as a cyber leader. One rule we have in my team is that we do not use work chat after business hours unless there are critical events. Everyone needs a break and sometimes hearing a text or chat notification can create undue stress. Another critical aspect of being a cybersecurity professional is to hold to your integrity. People often do not like the fact that we have to monitor, report, and investigate systems and human behavior. When we get pushback for this with unprofessional behavior or defensiveness, it can often cause great personal stress. ... Executive leadership plays one of the most critical roles in supporting the CISO. Without executive level support, we would be crushed by the demands and the frequent conflicts of interest we experience. For example, project managers, CIOs, and other IT leadership roles might prioritize budget, cost, timelines, or other needs above security. A security professional prioritizes people (safety) and security above cost or timelines. The nature of our roles requires executive leadership support to balance the security and privacy risk (and what is acceptable to an executive). I think in several instances the executive board and CEOs understand this, but we are still a growing profession and there needs to be more education in this area.


Building Trust in Synthetic Media Through Responsible AI Governance

Relying solely on labeling tools faces multiple operational challenges. First, labeling tools often lack accuracy. This creates a paradox: inaccurate labels may legitimize harmful media, while unlabeled content may appear trustworthy. Moreover, users may not view basic AI edits, such as color correction, as manipulation, while opinions differ on changes like facial adjustments or filters. It remains unclear whether simple color changes require a label, or if labeling should only occur when media is substantively altered or generated using AI. Similarly, many synthetic media artifacts may not fit the standard definition of pornography, such as images showing white substances on a person’s face; however, they can often be humiliating. ... Second, synthetic media use cases exist on a spectrum, and the presence of mixed AI- and human-generated content adds complexity and uncertainty in moderation strategies. For example, when moderating human-generated media, social media platforms only need to identify and remove harmful material. In the case of synthetic media, it is often necessary to first determine whether the content is AI-generated and then assess its potential harm. This added complexity may lead platforms to adopt overly cautious approaches to avoid liability. These challenges can undermine the effectiveness of labeling.


How future-ready leadership can power business value

Leadership in 2025 requires more than expertise; it demands adaptability, compassion, and tech fluency. “Leadership today isn’t about having all the answers; it’s about creating an environment where teams can sense, interpret, and act with speed, autonomy, and purpose,” said Govind. As the learning journey of Conduent pivots from stabilization to growth, he shared that leaders need to do two key things in the current scenario: be human-centric and be digitally fluent. Similarly, Srilatha highlighted a fundamental shift happening among the leaders: “Leaders today must lead with both compassion and courage while taking tough decisions with kindness.” She also underlined the rising importance of the three Rs in modern leadership: Reskilling, resilience, and rethinking. ... Govind pointed to something deceptively simple: acting on feedback. “We didn’t just collect feedback, we analyzed sentiment, made changes, and closed the loop. That made stakeholders feel heard.” This approach led Conduent to experiment with program duration, where they went from 12 to 8 to 6 months. “Learning is a continuum, not a one-off event,” Govind added. ... Leadership development is no longer optional or one-size-fits-all. It’s a business imperative—designed around human needs and powered by digital fluency.


The CISO’s 5-step guide to securing AI operations

As AI applications extend to third parties, CISOs will need tailored audits of third-party data, AI security controls, supply chain security, and so on. Security leaders must also pay attention to emerging and often changing AI regulations. The EU AI Act is the most comprehensive to date, emphasizing safety, transparency, non-discrimination, and environmental friendliness. Others, such as the Colorado Artificial Intelligence Act (CAIA), may change rapidly as consumer reaction, enterprise experience, and legal case law evolve. CISOs should anticipate other state, federal, regional, and industry regulations. ... Established secure software development lifecycles should be amended to cover things such as AI threat modeling, data handling, API security, etc. ... End user training should include acceptable use, data handling, misinformation, and deepfake training. Human risk management (HRM) solutions from vendors such as Mimecast may be necessary to keep up with AI threats and customize training to different individuals and roles. ... Simultaneously, security leaders should schedule roadmap meetings with leading security technology partners. Come to these meetings prepared to discuss specific needs rather than sit through pie-in-the-sky PowerPoint presentations. CISOs should also ask vendors directly about how AI will be used for existing technology tuning and optimization.


State of Open Source Report Reveals Low Confidence in Big Data Management

"Many organizations know what data they are looking for and how they want to process it but lack the in-house expertise to manage the platform itself," said Matthew Weier O'Phinney, Principal Product Manager at Perforce OpenLogic. "This leads to some moving to commercial Big Data solutions, but those that can't afford that option may be forced to rely on less-experienced engineers. In which case, issues with data privacy, inability to scale, and cost overruns could materialize." ... EOL operating system, CentOS Linux, showed surprisingly high usage, with 40% of large enterprises still using it in production. While CentOS usage declined in Europe and North America in the past year, it is still the third most used Linux distribution overall (behind Ubuntu and Debian), and the top distribution in Asia. For teams deploying EOL CentOS, 83% cited security and compliance as their biggest concern around their deployments. ... "Open source is the engine driving innovation in Big Data, AI, and beyond—but adoption alone isn't enough," said Gael Blondelle, Chief Membership Officer of the Eclipse Foundation. "To unlock its full potential, organizations need to invest in their people, establish the right processes, and actively contribute to the long-term sustainability and growth of the technologies they depend on."


Cybercrime goes corporate: A trillion-dollar industry undermining global security

The CaaS market is a booming economy in the shadows, driving annual revenues into the billions. While precise figures are elusive due to its illicit nature, reports suggest it's a substantial and growing market. CaaS contributes significantly, and the broader cybersecurity services market is projected to reach hundreds of billions of dollars in the coming years. If measured as a country, cybercrime would already be the world's third-largest economy, with projected annual damages reaching USD 10.5 trillion by 2025, according to Cybersecurity Ventures. This growth is fueled by the same principles that drive legitimate businesses: specialisation, efficiency, and accessibility. CaaS platforms function much like dark online marketplaces. They offer pre-made hacking kits, phishing templates, and even access to already compromised computer networks. These services significantly lower the entry barrier for aspiring criminals. ... Enterprises must recognise that attackers often hit multiple systems simultaneously—computers, user identities, and cloud environments. This creates significant "noise" if security tools operate in isolation. Relying on many disparate security products makes it difficult to gain a holistic view and understand that seemingly separate incidents are often part of a single, coordinated attack.


Modern apps broke observability. Here’s how we fix it.

For developers, figuring out where things went wrong is difficult. In a survey looking at the biggest challenges to observability, 58% of developers said that identifying blind spots is a top concern. Stack traces may help, but they rarely provide enough context to diagnose issues quickly; developers chase down screenshots, reproduce problems, and piece together clues manually using the metric and log data from APM tools; a bug that could take 30 minutes to fix ends up consuming days or weeks. Meanwhile, telemetry data accumulates in massive volumes—expensive to store and hard to interpret. Without tools to turn data into insight, you’re left with three problems: high bills, burnout, and time wasted fixing bugs that neither affect core business functions nor drive revenue, at a time when increasing developer efficiency is a top strategic goal at organizations. ... More than anything, we need a cultural change. Observability must be built into products from the start. That means thinking early about how we’ll track adoption, usage, and outcomes—not just deliver features. Too often, teams ship functionality only to find no one is using it. Observability should show whether users ever saw the feature, where they dropped off, or what got in the way. That kind of visibility doesn’t come from backend logs alone.
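Tracking whether users ever saw a feature and where they dropped off needs only structured product events plus a small funnel query on top. A hedged sketch of the idea (the event schema and stage names are invented for illustration):

```python
import json
from datetime import datetime, timezone

events = []  # in practice an event pipeline, not an in-memory list

def emit(user, feature, stage):
    """Record a structured product-telemetry event ('seen', 'started', 'completed')."""
    event = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "feature": feature,
        "stage": stage,
    }
    events.append(event)
    return json.dumps(event)  # structured, so it stays cheap to query later

def funnel(events, feature):
    """Count distinct users reaching each stage of a feature, to spot drop-off."""
    stages = {"seen": set(), "started": set(), "completed": set()}
    for e in events:
        if e["feature"] == feature and e["stage"] in stages:
            stages[e["stage"]].add(e["user"])
    return {stage: len(users) for stage, users in stages.items()}

emit("u1", "export", "seen"); emit("u2", "export", "seen")
emit("u1", "export", "started")
emit("u1", "export", "completed")
print(funnel(events, "export"))
```

A funnel like this immediately answers the questions backend logs cannot: here one of two users who saw the feature never started it at all.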

Daily Tech Digest - October 22, 2024

GenAI surges in law firms: Will it spell the end of the billable hour?

All areas of law will use genAI, according to Joshua Lenon, Clio’s Lawyer in Residence. That’s because AI content generation and task automation tools can help the business side and practice efforts of law firms. However, areas that have repetitive workflows and large document volumes – like civil litigation – will adopt genAI e-discovery tools more quickly. Practice areas that charge exclusively flat fees – like traffic offenses and immigration – are already the largest adopters of genAI. ... Nearly three-quarters of a law firm’s hourly billable tasks are exposed to AI automation, with 81% of legal secretaries’ and administrative assistants’ tasks being automatable, compared to 57% of lawyers’ tasks, according to a survey by Clio of legal professionals (1,028) and other adults (1,003) in the U.S. general population. Hourly billing has long been the preference of many professionals, from lawyers to consultants, but AI adoption is upending this model where clients are charged for the time spent on services. ... People have been talking about the demise of the billable hour for about 30 years “and nothing’s killed it yet,” said Ryan O’Leary, research director for privacy and legal technology at IDC. “But if anything will, it’ll be this.”


IT security and government services: Balancing transparency and security

For cyber defenses, government IT leaders should invest in website hosting services with Secure Sockets Layer (SSL) encryption, and further enhancing security with HTTP Strict Transport Security (HSTS). These measures ensure that all data exchanged via government sites is encrypted, protecting resident self-service features such as online voter registration, permit submissions, utility bill payments, and more. By enforcing HSTS, websites are also protected from protocol downgrade attacks and cookie hijacking, ensuring that all connections remain secure, and reducing the risk of data interception. Other marks of a reliable website hosting solution provider include DDoS mitigation coverage and reliability around regular software patching and updates. For all digital partners, it’s essential to consider third-party risk. Some of the most valuable information residents should be able to access – meeting minutes, agendas, and other documents pertaining to local governing decisions – are hosted by document management vendors. To ensure this access is secure, each vendor must be vetted on its security capabilities, so that critical data is always protected, and hackers are not able to prevent access for residents or laterally move further into government networks.
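HSTS is ultimately one response header, usually configured at the web server or CDN rather than in application code. As an illustration of what the policy looks like, here is a minimal WSGI middleware; the demo app and parameter names are invented for the example, and a real deployment would set the header in the hosting platform's configuration:

```python
def add_hsts(app, max_age=31536000):
    """Wrap a WSGI app so every response carries an HSTS header,
    telling browsers to refuse plain-HTTP connections for a year."""
    def wrapped(environ, start_response):
        def sr(status, headers, exc_info=None):
            headers = headers + [("Strict-Transport-Security",
                                  f"max-age={max_age}; includeSubDomains")]
            return start_response(status, headers, exc_info)
        return app(environ, sr)
    return wrapped

def demo_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"government services portal"]

app = add_hsts(demo_app)
```

Once a browser has seen this header over a valid HTTPS connection, it will upgrade future requests itself, which is what blocks the protocol-downgrade and cookie-hijacking attacks mentioned above.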


Software buying trends are changing: From SaaS to outcome as a service

The last decade saw the rise of Software-as-a-Service (SaaS), transforming how businesses approached software deployment. This decade belongs to Outcomes-as-a-Service. CIOs are no longer interested in building large internal developer teams or experimenting with different platforms. They seek business-impacting solutions with tangible outcomes that drive business success. Business teams need solutions that deliver results today, not tomorrow. ... AI-powered hyperautomation combines generative AI, BPM, RPA, integrations, analytics, and app-building to drive end-to-end outcomes. In today’s dynamic business environment, an integrated approach is essential. Siloed automation with narrowly focused platforms is no longer sufficient. ... AI platforms excel in delivering outcomes at speed and scale. Leveraging automation expertise, they ensure outcomes linked to growth, efficiency, and compliance. These platforms implement continuous cycles of process mining, implementation, adoption, and solution refinement until desired objectives are met. They also offer a comprehensive solution, managing everything from process definition and refinement to platform implementation, support, application development, and adoption.


How Retailers Are Using Tech for Competitive Advantage

“While technology can streamline operations, an overreliance on automation without human touch can sometimes backfire,” Peters says. “Consumers still value human interaction, especially in complex support scenarios. It’s crucial for retailers to balance automation with human agents, particularly in areas that require empathy and nuanced decision-making.” ... Companies of all sizes benefit from greater organizational efficiency, and tech has been the fuel powering digital transformation. For example, Lowe’s uses AR for home improvement shopping, while Sephora uses it for virtual makeup try-ons. Walmart is stepping up automation in its battle against Amazon. But smaller retailers are benefiting, too. ... “One of our customer’s last large-scale automations took them five years from the time they started the concept to deployment,” Naslund says. “For context, the pandemic was four and a half years, and the amount of volatility that the supply chain saw over those four years was insane. We saw inventory gluts, inventory shortages, and panic buying. Then you saw a warehouse capacity shortage, everybody panicking to get warehouses. Then, they suddenly have too much space.”


Why and How IT Leaders Can Embrace the AI Revolution

AI software certainly has some consequences for IT departments. There may be some new types of workflows to manage, new user requests to support, and new application deployments to track. But unless your business is actually building complex AI solutions from scratch — which it probably isn't or shouldn't because sophisticated, mature AI tools and services are available from external vendors, complete with support plans and SLAs — implementing AI is not actually that challenging. That's because most third-party AI solutions boil down to SaaS apps that work just like any other SaaS: The vendor builds, manages, and supports them, with few resources and little effort necessary on the part of customers' IT departments. From the perspective of IT, implementing AI isn't all that different from implementing any other type of software. ... For IT, there are really not any novel data privacy or security risks at stake here. The app ingests financial data, but so do plenty of non-AI applications. IT's responsibility when it comes to managing data security for this type of app boils down to vetting the vendor by reviewing its data management and compliance practices. The fact that the app uses AI doesn't change this process.


Has the time come for integrated network and security platforms?

Interest in platformization is growing among enterprises, asserts Extreme Networks, which recently surveyed 200 CIOs and senior IT leaders for its research, CIO Insights Report: Priorities and Investment Plans in the Era of Platformization. ... A platform that helps organizations transition their network to the cloud to streamline IT efficiency and lower total cost of ownership is important, respondents said. In addition, 55% of respondents emphasized the need to integrate from a broad ecosystem of networking and security offerings, indicating a clear demand for unified platforms, Extreme concluded. ... “The message I got from the survey was that customers are operating in a world where there’s a massive proliferation of products, or applications, and that’s really translating into complexity. Complexity is equal to risk, and that complexity is happening in multiple places,” said Extreme Networks CTO Nabil Bukhari. Complexity is an interesting topic because it changes, Bukhari said. The first Ford cars were basically just an engine with brakes, but they were complicated to start and drive. “Now, if you look at a car, they are like data centers on wheels. But driving and owning them is exponentially easier,” Bukhari said.


How legacy IT systems can hold your business back

While legacy IT systems may still be functional, they can hold a business back from reaching its full potential – especially if market competitors are busy upgrading their own systems. Companies need to carefully evaluate the costs and benefits of keeping legacy systems in place and develop a plan to modernize their IT infrastructure. Investing in a modern data center solution can, over time, improve business agility, security, and your organization’s bottom line. ... This is especially true when it comes to next-generation applications using LLMs and machine learning (ML) for AI-dependent applications. Enterprise servers, storage and networking hardware, and software manufactured before about 2016 were not designed with scaled-up data workloads in mind – especially workloads for genAI, which just started to take off in 2021. This can hinder growth and force companies to invest in additional hardware or software just to maintain their current operations. Legacy systems are also more prone to failures and outages due to aging hardware and software. This downtime disrupts operations and leads to lost revenue, especially for critical business functions. Additionally, data loss from system crashes can be costly to recover from.


Architecture Inversion: Scale by Moving Computation, Not Data

Now why should the rest of us care, blessed as we are with a lack of most of the billions of users TikTok, Google and the likes are burdened with? A number of factors are becoming relevant: ML algorithms are improving, and so is local compute capacity, meaning fully scoring items gives a larger boost in quality and ultimately profit than used to be the case. With the advent of vector embeddings, the signals consumed by such algorithms have grown by one to two orders of magnitude, making the network bottleneck more severe. Applying ever more data to solve problems is increasingly cost effective, which means more data needs to be rescored to maintain a constant quality loss. As the consumers of data from such systems move from being mostly humans to mostly LLMs in RAG solutions, it becomes beneficial to deliver larger amounts of scored data faster in more applications than before. ... For these reasons, the scaling tricks of the very biggest players are becoming increasingly relevant for the rest of us, which has led to the current proliferation of architecture inversion: going from traditional two-tier systems, where data is looked up from a search engine or database and sent to a stateless compute tier, to inserting that compute into the data itself.
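The inversion can be sketched in a few lines: instead of shipping every candidate vector across the network to a stateless scorer, ship the small query to the shards and move only top-k results back. This toy Python sketch assumes dot-product scoring over sharded embeddings; the class and function names are invented for illustration:

```python
import heapq

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

class DataNode:
    """One shard holding a slice of the corpus's embedding vectors."""
    def __init__(self, docs):
        self.docs = docs  # {doc_id: vector}

    def local_top_k(self, query_vec, k):
        # Scoring runs next to the data; only (score, id) pairs leave the node.
        scored = ((dot(query_vec, v), d) for d, v in self.docs.items())
        return heapq.nlargest(k, scored)

# Traditional two-tier flow: pull every vector across the network,
# then score in the application tier (cost grows with corpus size).
def score_in_app_tier(query_vec, nodes, k=2):
    candidates = [(d, v) for n in nodes for d, v in n.docs.items()]  # network-heavy
    return heapq.nlargest(k, ((dot(query_vec, v), d) for d, v in candidates))

# Inverted flow: push the (small) query to each shard and merge partial top-k.
def score_at_data_nodes(query_vec, nodes, k=2):
    partials = [p for n in nodes for p in n.local_top_k(query_vec, k)]
    return heapq.nlargest(k, partials)
```

Both functions return the same ranking; the difference is what crosses the network: all vectors in the first case, a query plus a handful of `(score, id)` pairs in the second.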


The secret to successful digital initiatives is pretty simple, according to Gartner

As with all technologies, seeing results from AI comes down to focusing like a laser beam on the problem at hand: "In my experience, the businesses that start with a real use case and problem are seeing an ROI," Julian LaNeve, chief technology officer at Astronomer, a data platform company, told ZDNET. "They define a well-scoped, impactful problem and use gen AI to solve [it], and it's easy to measure success and ROI. The most successful business cases identify how to solve a problem that the business already cares deeply about and [will] deliver additional value to customers." Technology maturity also makes a difference in success rates. "Previous generations of AI were narrower in scope but have been successful," said Dominic Sartorio, vice president at Denodo, a data management provider. "AI is helping with predictive maintenance of manufactured goods, predicting demand spikes in [the] markets, and finding the optimal routes for logistics, and [has] been successful for many years." Furthermore, according to Gartner, companies that treat their digital initiatives in a collaborative fashion -- between business and IT leaders -- rather than leaving all things digital up to their IT departments are successful with technology. 


Showing AI users diversity in training data can boost perceived fairness and trust

The work investigated whether displaying racial diversity cues—the visual signals on AI interfaces that communicate the racial composition of the training data and the backgrounds of the typically crowd-sourced workers who labeled it—can enhance users' expectations of algorithmic fairness and trust. Their findings were recently published in the journal Human-Computer Interaction. AI training data is often systematically biased in terms of race, gender and other characteristics, according to S. Shyam Sundar, Evan Pugh University Professor and director of the Center for Socially Responsible Artificial Intelligence at Penn State. "Users may not realize that they could be perpetuating biased human decision-making by using certain AI systems," he said. Lead author Cheng "Chris" Chen, assistant professor of communication design at Elon University, who earned her doctorate in mass communications from Penn State, explained that users are often unable to evaluate biases embedded in the AI systems because they don't have information about the training data or the trainers. "This bias presents itself after the user has completed their task, meaning the harm has already been inflicted, so users don't have enough information to decide if they trust the AI before they use it," Chen said.



Quote for the day:

"It takes courage and maturity to know the difference between a hoping and a wishing." -- Rashida Jourdain

Daily Tech Digest - June 01, 2024

AI Governance: Is There Too Much Focus on Data Leakage?

While data leakage is an issue, it’s by no means the only one. GenAI stands apart due to its autonomous nature and its unique ability to create new content from the information it is exposed to, and this introduces a whole host of new problems. Data poisoning, for instance, sees a malicious actor intentionally compromise the data feed of the AI to skew results. This might involve seeding an LLM with examples of deliberately vulnerable code, resulting in issues being adopted in new code. Without proper checks and balances in place, this could result in the poisoned data being pulled into organisational codebases via requests from developers. The code could then end up in production applications and services, which would be vulnerable to a zero-day attack. AI hallucinations, sometimes referred to as confabulations, are another issue. Unlike poisoning, this is the result of the AI’s autonomy, which can see it make incorrect deductions based on the data it's presented with. GenAI can and does make mistakes, and there are numerous notable examples here too.


12 Key AI Patterns for Improving Data Quality (DQ)

While there are many solutions and options to improve data quality, AI is a very viable option. AI can significantly enhance data quality in several ways. Here are 12 key use cases or patterns, drawn from four categories, where AI can help improve data quality in business enterprises. ... Firstly, as LLMs such as ChatGPT and Gemini are trained on enormous amounts of public data, it is nearly impossible to validate the accuracy of this massive data set. This often results in hallucinations or factually incorrect responses. No business enterprise would like to be associated with a solution that has even a small probability of giving an incorrect response. Secondly, data today is a valuable business asset for every enterprise. Stringent regulations such as GDPR, HIPAA, and CCPA are forcing companies to protect personal data. Breaches can lead to severe financial penalties and damage to the company’s reputation and brand. Overall, organizations want to protect their data by keeping it private and not sharing it with everyone on the internet.


Experts Warn of Security Risks in Grid Modernization

Experts recommend requiring comprehensive security assessments on all GETs and modern grid components. They say malicious actors and foreign adversaries already possess unauthorized access to many critical infrastructure sectors. The Cybersecurity and Infrastructure Security Agency has steadily released a series of alerts in recent months warning of a Chinese state-sponsored hacking group known as Volt Typhoon. The group is aiming to pre-position itself using "living off the land" techniques on information technology networks "for disruptive or destructive cyber activity against U.S. critical infrastructure in the event of a major crisis or conflict with the United States," according to CISA. "The Volt Typhoon alerts have said the quiet part out loud," said Padraic O'Reilly, chief innovation officer for the risk management platform CyberSaint Security. "The [threat] is in the networks, so new infrastructure must not allow for lateral movement on OT assets." Biden's federal-state grid modernization plan emphasizes the need to "speed up adoption and deployment" of GETs. 


Corporations looking at gen AI as a productivity tool are making a mistake

Taking the time to focus on the bigger picture will set up organizations for more success in the future, Menon said. AI is transformational and requires a comprehensive reevaluation of current business processes, data strategies, technology platforms, and people strategies, Pallath said. “Implementing AI effectively necessitates simplifying and revamping business processes with an AI-first mindset,” Pallath said. “Effective change management and governance are crucial to ensure that the entire organization is prepared for and engaged in this transformation.” What often happens, he said, is that employees worry more about AI’s impact on their jobs, rather than how they can leverage the technology to help them work smarter, thereby hindering the necessary changes in process to make AI successful. Executive leadership and sponsorship are also critical. “AI initiatives need strong leadership support to overcome inertia and gain the necessary resources,” Pallath said. “Without a clear vision from the top, AI projects are more likely to get stalled or diluted.” A dedicated AI team headed by a chief AI officer can help ensure success. 


Why HTML Actions Are Suddenly a JavaScript Trend

Actions in React look a lot like HTML actions, but they also look similar to event handlers like onsubmit or onclick, Clark said. “Despite the surface-level similarities, though, actions have some important abilities that set them apart from regular event handlers,” he continued. “One such ability is support for progressive enhancement. Form actions in React are interactive before hydration occurs. Believe it or not, this works with all actions, not just actions defined on the server.” If the user interacts with a client action before it is finished hydrating, React will queue the action and replay it as soon as hydration completes, he said. If the user interacts with a server action, the action can immediately trigger a regular browser navigation, without hydration or JavaScript. Actions can also handle asynchronous logic, he said. “React actions have built-in support for UX patterns like optimistic UI and error handling,” he said. “Actions make these complex UX patterns super simple by deeply integrating with React features like suspense and transitions.”


Indonesia to Create 'Super Apps' to Run Government Services

The government has entrusted state-owned technology company Perum Peruri, commonly known as Peruri, with developing the new applications, digitizing government services and implementing the government's Electronic-Based Government System, which will run modernized applications and digital portals. ... The company said its rich history of developing high-security solutions makes it the ideal choice to lead the government's digital transformation program. "Peruri presents a fresh visual identity that illustrates how we are able to produce quality services to maintain the authenticity of products, identities and complex digital systems," said President and Director Dwina Septiani Wijaya. "The transformation process we are undergoing does not only focus on business and infrastructure, but we also understand the importance of quality human resources. ... The government's planned integration of government applications could make it easier for IT security teams to manage far fewer applications than before, but could also make the new super applications prime targets for hacking attacks considering the amount of public data they would process.


Within two years, 90% of organizations will suffer a critical tech skills shortage

Among the challenges organizations face when trying to expand the skills of their employees is resistance to training. Employees complain that the courses are too long, the options for learning are too limited, and there isn’t enough alignment between skills and career goals, according to IDC’s survey. ... IT leaders need to employ a variety of strategies to encourage a more effective learning environment within their organization. That includes everything from classroom training to hackathons, hand-on labs, and games, quests, and mini-badges. But fostering a positive learning environment in an organization requires more than just materials, courses, and challenges. Culture change begins at the top, and leaders need to demonstrate why learning matters to the organization. “This can be done by aligning employee goals with business goals, promoting continuous learning throughout the employee’s journey, and creating a rewards program that recognizes process as well as performance,” IDC’s report stated. “It also requires the allocation of adequate time, money, and people resources.”


RIG Model - The Puzzle of Designing Guaranteed Data-Consistent Microservice Systems

The RIG model sets the foundation for the saga design. It is founded on the CAP theorem and the work of Bromose and Laursen. The theoretical work results in a set of microservice categories and rules that the sagas must comply with if we are to guarantee data consistency. The RIG model divides microservice behavior within a saga into three categories: Guaranteed microservices: Local transactions will always be successful. No business constraints will invalidate the transaction. Reversible microservices: Local transactions can always be undone and successfully rolled back with the help of compensating transactions. Irreversible microservices: Local transactions cannot be undone. ... A reversible microservice must include support for a compensating transaction and be able to handle an incoming "cancel transaction" message. When receiving a "cancel transaction" request, the microservice must "roll back" to the state before the saga. When handling a compensating transaction, a reversible microservice must behave as a "Guaranteed" service.
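A minimal sketch of how a saga orchestrator might honor these categories, assuming reversible steps supply a compensating `undo` while guaranteed and irreversible steps pass `None`; all names are illustrative, not taken from the RIG paper:

```python
class SagaAbort(Exception):
    """Raised by a local transaction that hits a business constraint."""

def run_saga(steps):
    """Run (do, undo) steps in order.

    Reversible steps supply a compensating `undo`; guaranteed and
    irreversible steps pass undo=None. Per the RIG rules, irreversible
    steps must be ordered after every step that can still abort.
    On abort, completed steps are compensated in reverse order.
    """
    done = []
    try:
        for do, undo in steps:
            do()
            done.append(undo)
    except SagaAbort:
        for undo in reversed(done):
            if undo is not None:
                undo()  # the compensating transaction must itself be guaranteed
        return False
    return True
```

Note how the rules surface in code: the rollback loop assumes every `undo` succeeds, which is exactly the requirement that a reversible service behave as "Guaranteed" when compensating, and placing irreversible steps last ensures nothing un-undoable has run when an abort occurs.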


3 reasons users can’t stop making security mistakes — unless you address them

People are naturally inclined to find the fastest possible route at work, and that often translates into taking shortcuts that compromise security for the sake of convenience. Even tech employees are not immune; importing libraries from public repositories on the assumption they are safe is one example, as such repositories continue to be used to distribute malware and steal passwords. To avoid shortcuts that can threaten systems, CISOs can put automated MFA prompts in place to reduce risks from compromised passwords and restrict access to services that could put data at risk, including generative AI or downloadable code libraries. ... Users should use out-of-band communication for verification to deter attacks and scams. Contacting a business through a phone number or email previously established as legitimate is a good way to ascertain whether a message is authorized by the entity it claims to be from. While CISOs can’t eliminate all human risk, they can significantly reduce incidents and promote a cyber-aware culture with a strategy that addresses the psychological drivers behind poor decisions.


Elevating Defense Precision With AI-Powered Threat Triage in Proactive Dynamic Security

AI-powered threat triage operates on the principle of predictive analytics, leveraging machine learning algorithms to sift through massive datasets and identify patterns indicative of potential security threats. By continuously analyzing historical data and monitoring network activity, AI systems can detect subtle anomalies and deviations from normal behavior that may signify an impending attack. Moreover, AI algorithms can adapt and learn from new data, enabling them to evolve and improve their threat detection capabilities over time. In the perpetual battle against an ever-expanding array of cyber threats, organizations are increasingly turning to innovative technologies to bolster their defenses and stay ahead of potential attacks. ... At the forefront of this technological revolution is the integration of Artificial Intelligence (AI) into threat triage processes, where advanced algorithms and machine learning capabilities are ushering in a new era of proactive defense and transforming traditional cybersecurity strategies.
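As a toy stand-in for the learned models the article describes, a simple z-score check against a historical baseline captures the basic idea of flagging deviations from normal behavior (threshold and data are illustrative):

```python
from statistics import mean, stdev

def flag_anomalies(baseline, current, threshold=3.0):
    """Flag observations deviating more than `threshold` standard
    deviations from the historical baseline (e.g., requests per minute)."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in current if sigma and abs(x - mu) / sigma > threshold]
```

Real triage systems replace the static baseline with models that retrain on new data, which is the "adapt and learn" property the article highlights, but the shape of the decision is the same: score the deviation, then rank what crosses the line.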



Quote for the day:

"A leadership disposition guides you to take the path of most resistance and turn it into the path of least resistance." -- Dov Seidman

Daily Tech Digest - November 02, 2023

How Banks Can Turn Risk Into Reward Through Data Governance

To understand why data governance is critical for banks, we must understand the underlying challenges facing financial services organizations as they modernize. Rolling out new cloud applications or Internet of Things (IoT) devices into an environment where legacy on-premises systems are already in place means more data silos and data sets to manage. Often, this results in data volumes, variety, and velocity increasing much too quickly for banks. This gives rise to IT complexity—driven by technical debt or the reliance on systems cobbled together and one-off connections. Not only that, it also raises the specter of 'shadow IT' as employees look for workarounds to friction in executing tasks. This can create difficulties for banks trying to identify and manage their data assets in a consistent, enterprise-wide way that is aligned with business strategy. Ultimately, barely controlled data leads to errant financial reporting, data privacy breaches, and non-compliance with consumer data regulations. Failing to counter these risks can lead to fines, hurt brand image, and trigger lost sales. 


Key Considerations for Developing Organizational Generative AI Policies

It's crucial to ensure that all relevant stakeholders have a voice in the process, both to make the policy comprehensive and actionable and to ensure adherence to legal and ethical standards. The breadth and depth of stakeholders involved will depend on the organizational context, such as regulatory/legal requirements, the scope of AI usage and the potential risks associated (e.g., ethics, bias, misinformation). Stakeholders offer technical expertise, ensure ethical alignment, provide legal compliance checks, offer practical operational feedback, collaboratively assess risks, and jointly define and enforce guiding principles for AI use within the organization. Key stakeholders—ranging from executive leadership, legal teams and technical experts to communication teams, risk management/compliance and business group representatives—play crucial roles in shaping, refining and implementing the policy. Their contributions ensure legal compliance, technical feasibility and alignment with business and societal values.


CIOs sharpen cloud cost strategies — just as gen AI spikes loom

One key skill CIOs are honing to lower costs is their ability to negotiate with cloud providers, said one CIO who declined to be named. “People better understand the charges, and [they] better negotiate costs. After being in cloud and leveraging it better, we are able to manage compute and storage better ourselves,” said the CIO, who notes that vendors are not cutting costs on licenses or capacity but are offering more guidance and tools. “After some time, people have understood the storage needs better based on usage and preventing data extract fees.” Thomas Phelps, CIO and SVP of corporate strategy at Laserfiche, says cloud contracts typically include several “gotchas” that IT leaders and procurement chiefs should be aware of, and he stresses the importance of studying terms of use before signing. ... CIOs may also fall into the trap of misunderstanding product mixes and the downside of auto-renewals, he adds. “I often ask vendors to walk me through their product quote and explain what each product SKU or line item is, such as the cost for an application with the microservices and containerization,” Phelps says. 


Misdirection for a Price: Malicious Link-Shortening Services

Security researchers gave the service the codename "Prolific Puma." They discovered it by identifying patterns in links being used by some scammers and phishers that appeared to trace to a common source. The service appears to have been active since at least 2020 and is regularly used to route victims to malicious domains, sometimes first via other link-shortening service URLs. "Prolific Puma is not the only illicit link shortening service that we have discovered, but it is the largest and the most dynamic," said Renee Burton, senior director of threat intelligence for Infoblox, in a new report on the cybercrime service. "We have not found any legitimate content served through their shortener." Infoblox, a Santa Clara, California-based IT automation and security company, published a list of 60 URLs it has tied to Prolific Puma's attacks. The URLs employ such domains as hygmi.com, yyds.is, 0cq.us, 4cu.us and regz.information. Infoblox said many domains registered by the group are parked for several weeks before being used, since many reputation-based security defenses will treat freshly registered domains as more likely to be malicious.


DNS security poses problems for enterprise IT

EMA asked research participants to identify the DNS security challenges that cause them the most pain. The top response (28% of all respondents) is DNS hijacking. Also known as DNS redirection, this process involves intercepting DNS queries from client devices so that connection attempts go to the wrong IP address. Hackers often achieve this by infecting clients with malware so that queries go to a rogue DNS server, or by hacking a legitimate DNS server and hijacking queries at a much larger scale. The latter method can have a large blast radius, making it critical for enterprises to protect DNS infrastructure from hackers. The second most concerning DNS security issue is DNS tunneling and exfiltration (20%). Hackers typically exploit this issue once they have already penetrated a network. DNS tunneling is used to evade detection while extracting data from a compromised network. Hackers hide extracted data in outgoing DNS queries. Thus, it’s important for security monitoring tools to closely watch DNS traffic for anomalies, like abnormally large packet sizes. The third most pressing security concern is a DNS amplification attack (20%).
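As a rough illustration of the anomaly-watching the article recommends, here is a heuristic that flags DNS query names with unusually long or high-entropy subdomain labels, a common signature of encoded payloads; the thresholds are arbitrary and purely illustrative:

```python
import math
from collections import Counter

def entropy(label: str) -> float:
    """Shannon entropy (bits per character) of a DNS label."""
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_tunnel(qname: str, max_label: int = 40,
                      min_entropy: float = 3.5) -> bool:
    """Tunneled data tends to produce very long, high-entropy subdomain
    labels in outgoing queries, e.g. base64-encoded chunks of a file."""
    labels = qname.rstrip(".").split(".")
    # Ignore the registered domain and TLD; inspect the longest subdomain label.
    sub = max(labels[:-2], key=len, default="")
    return len(sub) > max_label or (len(sub) > 8 and entropy(sub) > min_entropy)
```

A monitoring tool would apply checks like this (alongside query volume and packet-size statistics) to each outgoing query and alert on hosts that trip them repeatedly.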


Data governance that works

Once we've found our targeted business initiatives and the data is ready to meet the needs of those initiatives, there are three major governance pillars we want to address for that data: understand, curate, and protect. First, we want to understand the data. That means having a catalog of data that we can analyze and explain. We need to be able to profile the data, to look for anomalies, to understand the lineage of that data, and so on. We also want to curate the data, or make it ready for our particular initiatives. We want to be able to manage the quality of the data, integrate it from a variety of sources across domains, and so on. And we want to protect the data, making sure we comply with regulations and manage the life cycle of the data as it ages. More importantly, we need to enable the right people to get to the right data when they need it. AWS has tools, including Amazon DataZone and AWS Glue, to help companies do all of this. It's really tempting to attack these issues one by one and to support each individually. But in each pillar, there are so many possible actions that we can take. This is why it's better to work backwards from business initiatives.
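As a toy illustration of the "understand" pillar, here is a tiny profiler reporting null rate, distinct count, and type mix for one column; it is plain Python, not tied to Amazon DataZone or AWS Glue:

```python
def profile_column(values):
    """Report row count, null rate, distinct count, and type mix
    for a single column of data."""
    total = len(values)
    nulls = sum(v is None for v in values)
    present = [v for v in values if v is not None]
    return {
        "rows": total,
        "null_rate": round(nulls / total, 3) if total else 0.0,
        "distinct": len(set(present)),
        "types": sorted({type(v).__name__ for v in present}),
    }
```

Output like a high null rate or a mixed type list is exactly the kind of anomaly signal that profiling surfaces before the curate and protect steps begin.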


EU digital ID reforms should be ‘actively resisted’, say experts

The group’s concerns over the amendments largely centre on Article 45 of the reformed eIDAS, where it says the text “radically expands the ability of governments to surveil both their own citizens and residents across the EU by providing them with the technical means to intercept encrypted web traffic, as well as undermining the existing oversight mechanisms relied on by European citizens”. “This clause came as a surprise because it wasn’t about governing identities and legally binding contracts, it was about web browsers, and that was what triggered our concern,” explained Murdoch. ... All websites today are authenticated by root certificates controlled by certificate authorities, which assure the user that the cryptographic keys used to authenticate the website content belong to the website. The certificate owner can intercept a user’s web traffic by replacing these cryptographic keys with ones they control, even if the website has chosen to use a different certificate authority with a different certificate. There are multiple cases of this mechanism having been abused in reality, and legislation to govern certificate authorities does exist and, by and large, has worked well.


The key to success is to think beyond the obvious, to innovate and look for solutions

AI systems, including machine learning models, make critical decisions and recommendations. Ensuring the accuracy and reliability of these AI models is paramount. AI heavily relies on data and ensuring data quality, integrity, and consistency is a crucial task. Data pre-processing and validation are necessary steps to make AI models work effectively. Integration of software testing in the software development life cycle helps identify and rectify issues that could lead to incorrect predictions or decisions, minimizing the risks associated with AI tools. AI models are susceptible to adversarial attacks and robust security testing helps identify vulnerabilities and weaknesses in AI systems, protecting them from cyber threats and ensuring the safety of automated processes. Testing is not a one-time effort; it’s an ongoing process. Regular testing and monitoring are necessary to identify issues that may arise as AI models and automated systems evolve. High-quality, well-tested AI-driven automation can provide a competitive advantage.


We built a ‘brain’ from tiny silver wires.

We are working on a completely new approach to “machine intelligence”. Instead of using artificial neural network software, we have developed a physical neural network in hardware that operates much more efficiently. ... Using nanotechnology, we made networks of silver nanowires about one thousandth the width of a human hair. These nanowires naturally form a random network, much like the pile of sticks in a game of pick-up sticks. The nanowires’ network structure looks a lot like the network of neurons in our brains. Our research is part of a field called neuromorphic computing, which aims to emulate the brain-like functionality of neurons and synapses in hardware. Our nanowire networks display brain-like behaviours in response to electrical signals. External electrical signals cause changes in how electricity is transmitted at the points where nanowires intersect, which is similar to how biological synapses work. There can be tens of thousands of synapse-like intersections in a typical nanowire network, which means the network can efficiently process and transmit information carried by electrical signals.


Why public/private cooperation is the best bet to protect people on the internet

Neither the FTC nor the SEC was empowered by Congress with responsibility for cyberspace, and both have relied on pre-existing authorities related to corporate representations to bring actions against individuals who did not have corporate duties managing legal or external communications. They are using the tools at their disposal to change expectations, even if it means bringing a bazooka to a knife fight. These cases make CISOs worried that in addition to being technical experts they also need to personally become experts on data breach disclosure laws and experts on SEC reporting requirements rather than trusting their peers in the legal and communications departments of their organizations. What we need is a real partnership between the public and the private sector, clear rules and expectations for IT professionals and law enforcement, and an executive branch that will attempt regulation through rulemaking rather than through ugly and costly enforcement actions that target IT professionals for doing their jobs and further deepens the adversarial public-private divide.



Quote for the day:

"Leadership is working with goals and vision; management is working with objectives." -- Russel Honore

Daily Tech Digest - September 18, 2023

The ‘Great Retraining’: IT upskills for the future

As the technology ecosystem expands, Servier Pharmaceuticals’ Yunger believes cultivating hard-to-find skill sets from within is instrumental to future-proofing the IT organization. The company, a Google Cloud Platform shop, came face-to-face with that reality when it became difficult to find specialists, shifting its emphasis to growing its own talent. Yunger takes a talent lifecycle management approach that considers the firm’s three- to five-year strategy, aligns it to the requisite IT skills, and then matches the plan to individualized development and training programs. “We provide our vision of the future to our existing team and give them an opportunity to self-select into those paths to meet our future needs,” he explains. “The better our long-term vision, the more time we have to give our team the chance to learn and grow.” The University of California, Riverside, which is undertaking a similar practice to nurture IT talent from within, makes a concerted effort to start any large-scale reskilling initiative with those most willing to embrace change. 


The double-edged sword of AI in financial regulatory compliance

As fraudsters obtain more personal data and create more believable fake IDs, the accuracy of AI models improves, leading to more successful scams. The ease of creating believable identities enables fraudsters to scale identity-related scams with high success rates. Another key area where generative AI models can be employed by criminals is during various stages of the money laundering process, making detection and prevention more challenging. For instance, fake companies can be created to facilitate fund blending, while AI can simplify the generation of fake invoices and transaction records, making them more convincing. Furthermore, by bypassing KYC/CDD checks, it’s possible to create offshore accounts that hide the beneficial owners behind money laundering schemes. Generating false financial statements becomes effortless and AI can identify loopholes in legislation to facilitate cross-jurisdictional money movements.


Growing With AI Not Against It: How To Stay One Step Ahead

The key to effectively integrating AI into your business lies in proactive engagement. Rather than being passive recipients of technological changes, businesses should take an active role in understanding AI's potential applications. Reflecting on prominent companies such as Kodak and Nokia, which once dominated their respective industries but ultimately faltered due to their reluctance to adopt technological advancements, underscores the importance of embracing AI as a transformative force. Consider Netflix's evolution from mailing DVDs to streaming, and its use of AI algorithms to recommend personalized content to users. ... In the face of advancing AI technology, the role of leaders is not merely to keep up but to set the pace. By actively engaging with AI, embracing it as a partner, learning from mistakes, and strategically adapting our approach, we position ourselves to harness its potential to foster innovation and enable us to navigate the future with confidence.


Metaverse and Telemedicine: Creating a Seamless Virtual Healthcare Experience

Firstly, the convergence of new core technologies like blockchain, digital twins, and virtual hospitals into the Metaverse will empower clinicians to offer more integrated treatment packages and programs. Secondly, using AR and VR technologies will enhance patient experiences and outcomes. Another benefit of the Metaverse for telemedicine is that it will facilitate collaboration among healthcare professionals. The ability to share information between healthcare professionals immediately will enable quicker pinpointing of the causes of illnesses. Moreover, the Metaverse will offer new opportunities for students and trainees to examine the human body in a safe, virtual reality educational environment. Surgeons are already using VR, AR, and AI technology to perform minimally invasive surgeries, and the Metaverse opens up new frontiers in this area. Surgeons will be able to get a complete 360-degree view of a patient's body, allowing them to better perform complex procedures using these immersive technologies.


Adaptive Security: A Dynamic Defense for a Digital World

Adaptive security systems employ continuous monitoring to gain real-time insights into an organization's network, applications, and endpoints. This continuous data collection allows for the rapid detection of abnormal behavior and potential threats. ... Understanding the context of an activity is crucial in adaptive security. Systems analyze not only the behavior of individual elements but also the relationships between them. This context-awareness helps in distinguishing between normal and malicious activities, reducing false positives. ... Adaptive security leverages machine learning and artificial intelligence (AI) algorithms to process vast amounts of data and identify patterns indicative of threats. These algorithms can adapt and evolve their detection capabilities based on new information and emerging attack vectors. ... Automation is a core element of adaptive security. When a potential threat is detected, adaptive security systems can automatically respond by isolating affected systems, blocking suspicious traffic, or alerting security teams for further investigation. 
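The loop described above (continuous monitoring, baseline learning, and automated response) can be illustrated with a minimal sketch. All names here are hypothetical, and a production system would use far richer telemetry and models than a simple rolling z-score, but the shape of the feedback loop is the same:

```python
from collections import deque
from statistics import mean, stdev

class AdaptiveMonitor:
    """Toy adaptive-security loop: learn a rolling baseline of a metric
    (e.g. requests per minute from a host) and trigger an automated
    response when a new observation deviates sharply from it."""

    def __init__(self, window=20, z_threshold=3.0):
        self.history = deque(maxlen=window)  # rolling baseline
        self.z_threshold = z_threshold
        self.quarantined = set()

    def observe(self, host, value):
        # Judge only once enough history exists to form a baseline.
        if len(self.history) >= 5:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                self.respond(host, value)
                return True  # flagged; keep it out of the baseline
        self.history.append(value)
        return False

    def respond(self, host, value):
        # Automated containment: isolate the host and raise an alert.
        self.quarantined.add(host)
        print(f"ALERT: {host} anomalous reading {value}; host quarantined")

monitor = AdaptiveMonitor()
for v in [100, 102, 98, 101, 99, 103, 97, 100]:
    monitor.observe("web-01", v)          # normal traffic builds the baseline
flagged = monitor.observe("web-02", 900)  # sudden spike triggers a response
```

Note that flagged readings are excluded from the baseline, so an attacker cannot slowly "teach" the monitor that a spike is normal, which is one way adaptive systems reduce false negatives over time.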


The Power Duo: How Platforms and Governance Can Shape Generative AI

As you catalog the tools in your organization, consider where most of your development takes place. Is it happening solely in notebooks requiring code knowledge? Are you versioning your work through a tool like GitHub, which is often confusing to a non-coding audience? How is documentation handled and maintained over time? Oftentimes, business stakeholders and consumers of the model are locked out of the development process because there is a lack of technical understanding and documentation. When work happens in a silo, hand-offs between teams can be inefficient and result in knowledge loss or even operational roadblocks. This leads to results that are not trusted or, even worse, never adopted. Many organizations wait too long before leveraging business experts during the preparation and build stages of the AI lifecycle. ... This might be because only some of the glued-together infrastructure is understood by the business unit, the hand-off between teams is clunky and poorly documented, or the steps aren't clearly laid out in an understandable manner.


How India is driving tech developments at G20

While there were no major technology-related announcements, a lot of indirect spillovers can be found in discussions on artificial intelligence (AI) and crypto regulations, taking a human-centric approach to technology, digitisation of trade documents and tech-enabled development of agriculture and education. As a run-up, there were recommendations and policy actions for the business sector, including the Startup20 initiative to support startup companies and the focus on digital public infrastructure (DPI). The summit had also cast the spotlight on climate change commitments, clean energy, and sustainability development goals. Pradeep Gupta, founder of think tank Security and Policy Initiatives, noted that the emphasis on climate change initiatives at G20 would require IT to play a role in areas like equipment, data management and analytics. “Carbon credits cannot function without good AI and data technology in place,” he said. “DPI will also be a big lever for the industry.” V K Sridhar ... agreed that IT will be instrumental in driving all the climate change agreements that emerged at this G20 – both from a technology and administrative point of view. 


Executive Q&A: Developing Data-Focused Professionals

Many universities have been caught unprepared for the exploding demand for AI skills. Most educational programs are traditional (four years) and do not necessarily give students the specialized, just-in-time skills they need for these jobs. Deloitte had an interesting article about “AI whisperers” as the job of the future, referring to enterprises’ need for employees who deeply understand machine learning algorithms, data structures, and programming languages. Such jobs are already being advertised. An institute of higher education needs to be agile enough to create concentrations and certificates that quickly provide students and existing employees with just-in-time skills. ... There is inertia, and you can argue it is by design: universities are most comfortable with a traditional four-year education. They know how to do that, and the education boards that approve these programs are also comfortable with that format. However, a four-year education does not speak to all students, their needs, or where they are in life.


How to Become a Database Administrator

Capacity planning is a core responsibility of database administrators. Capacity planning is about estimating what resources will be needed – and available – in the future. These resources include computer hardware, software, storage, and connection infrastructure. Fortunately, planning for infrastructure-as-a-service (IaaS) is quite similar to planning for on-premises deployments. The basic difference in planning is the additional flexibility offered by the cloud. This flexibility allows DBAs to plan for the business’s immediate needs instead of planning for needs three to four years in advance. DBAs can also make use of the cloud’s ability to quickly scale up or down to meet the client’s demands. ... The DBA must be consciously aware of the business’s changing demands and the tools offered in the various clouds. Organizing the business in preparation for surge events – such as Black Friday or the start of school in September – and using the on-demand scalability available in cloud platforms is a primary responsibility of the modern DBA. Anticipating and responding to cyclical demands or major events makes the organization much more efficient.
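The planning horizon described above (near-term needs plus headroom for surge events) often reduces to a back-of-the-envelope projection. The growth rate and headroom figures below are purely illustrative assumptions, not recommendations:

```python
def project_storage_gb(current_gb, monthly_growth_rate, months, headroom=0.30):
    """Project database storage needs with compound monthly growth,
    plus a headroom buffer for surge events such as Black Friday.

    current_gb          -- storage in use today
    monthly_growth_rate -- e.g. 0.05 for 5% growth per month
    months              -- planning horizon
    headroom            -- surge buffer as a fraction of projected use
    """
    projected = current_gb * (1 + monthly_growth_rate) ** months
    return projected * (1 + headroom)

# 500 GB today, growing 5% per month: capacity to provision for the
# next 12 months, with a 30% surge buffer.
needed = project_storage_gb(500, 0.05, 12)
```

With cloud elasticity, the value of such a projection shifts from "what to buy three years out" to budgeting and setting sensible autoscaling limits over a much shorter horizon.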


SSE vs SASE: What You Need to Know

The Security Service Edge (SSE) framework was also coined by Gartner, but several years later, in 2021. The SSE framework retains most of the core elements of SASE. The key difference is that SSE is designed for IT environments where SD-WAN is not required. SSE fits well for networks that do not have multiple paths to reach destinations and do not need application-based routing decisions. SSE is responsible for secure web, cloud service, and application access. One of the top business scenarios in which SSE works best is VPN replacement for remote employees. ... Typically, those considering SSE want a purely cloud-based security platform that provides a range of security functions at the edge of the network. As with SASE, leading networking and security vendors also have SSE options. However, the cloud-native nature of SSE means it is often marketed as a single platform that can be easily deployed, managed, and scaled. For this reason, SSE will likely gain traction at organizations looking to simplify and scale security for remote workers and transition to cloud-native environments.



Quote for the day:

"Everything you want is on the other side of fear." -- Jack Canfield