Showing posts with label outsourcing. Show all posts
Showing posts with label outsourcing. Show all posts

Daily Tech Digest - January 09, 2025

It’s remarkably easy to inject new medical misinformation into LLMs

By injecting specific information into this training set, it's possible to get the resulting LLM to treat that information as a fact when it's put to use. This can be used for biasing the answers returned. This doesn't even require access to the LLM itself; it simply requires placing the desired information somewhere where it will be picked up and incorporated into the training data. And that can be as simple as placing a document on the web. As one manuscript on the topic suggested, "a pharmaceutical company wants to push a particular drug for all kinds of pain which will only need to release a few targeted documents in [the] web." ... rather than being trained on curated medical knowledge, these models are typically trained on the entire Internet, which contains no shortage of bad medical information. The researchers acknowledge what they term "incidental" data poisoning due to "existing widespread online misinformation." But a lot of that "incidental" information was generally produced intentionally, as part of a medical scam or to further a political agenda. ... Finally, the team notes that even the best human-curated data sources, like PubMed, also suffer from a misinformation problem. The medical research literature is filled with promising-looking ideas that never panned out, and out-of-date treatments and tests that have been replaced by approaches more solidly based on evidence.


CIOs are rethinking how they use public cloud services. Here’s why.

Where are those workloads going? “There’s a renewed focus on on-premises, on-premises private cloud, or hosted private cloud versus public cloud, especially as data-heavy workloads such as generative AI have started to push cloud spend up astronomically,” adds Woo. “By moving applications back on premises, or using on-premises or hosted private cloud services, CIOs can avoid multi-tenancy while ensuring data privacy.” That’s one reason why Forrester predicts four out of five so called cloud leaders will increase their investments in private cloud by 20% this year. That said, 2025 is not just about repatriation. “Private cloud investment is increasing due to gen AI, costs, sovereignty issues, and performance requirements, but public cloud investment is also increasing because of more adoption, generative AI services, lower infrastructure footprint, access to new infrastructure, and so on,” Woo says. ... Woo adds that public cloud is costly for workloads that are data-heavy because organizations are charged both for data stored and data transferred between availability zones (AZ), regions, and clouds. Vendors also charge egress fees for data leaving as well as data entering a given AZ. “So for transfers between AZs, you essentially get charged twice, and those hidden transfer fees can really rack up,” she says. 


What CISOs Think About GenAI

“As a [CISO], I view this technology as presenting more risks than benefits without proper safeguards,” says Harold Rivas, CISO at global cybersecurity company Trellix. “Several companies have poorly adopted the technology in the hopes of promoting their products as innovative, but the technology itself has continued to impress me with its staggeringly rapid evolution.” However, hallucinations can get in the way. Rivas recommends conducting experiments in controlled environments and implementing guardrails for GenAI adoption. Without them, companies can fall victim to high-profile cyber incidents like they did when first adopting cloud. Dev Nag, CEO of support automation company QueryPal, says he had initial, well-founded concerns around data privacy and control, but the landscape has matured significantly in the past year. “The emergence of edge AI solutions, on-device inference capabilities, and private LLM deployments has fundamentally changed our risk calculation. Where we once had to choose between functionality and data privacy, we can now deploy models that never send sensitive data outside our control boundary,” says Nag. “We're running quantized open-source models within our own infrastructure, which gives us both predictable performance and complete data sovereignty.”


Scaling RAG with RAGOps and agents

To maximize their effectiveness, LLMs that use RAG also need to be connected to sources from which departments wish to pull data – think customer service platforms, content management systems and HR systems, etc. Such integrations require significant technical expertise, including experience with mapping data and managing APIs. Also, as RAG models are deployed at scale they can consume significant computational resources and generate large amounts of data. This requires the right infrastructure as well as the experience to deploy it, as well as the ability to manage data it supports across large organizations. One approach to mainstreaming RAG that has AI experts buzzing is RAGOps, a methodology that helps automate RAG workflows, models and interfaces in a way that ensures consistency while reducing complexity. RAGOps enables data scientists and engineers to automate data ingestion and model training, as well as inferencing. It also addresses the scalability stumbling block by providing mechanisms for load balancing and distributed computing across the infrastructure stack. Monitoring and analytics are executed throughout every stage of RAG pipelines to help continuously refine and improve models and operations.


Navigating Third-Party Risk in Procurement Outsourcing

Shockingly, only 57% of organisations have enterprise-wide agreements that clearly define which services can or cannot be outsourced. This glaring gap highlights the urgent need to create strong frameworks – not just for external agreements, but also for intragroup arrangements. Internal agreements, though frequently overlooked, demand the same level of attention when it comes to governance and control. Without these solid frameworks, companies are leaving themselves exposed to risks that could have been mitigated with just a little more attention to detail. Ongoing monitoring is also crucial to TPRM; organisations must actively leverage audit rights, access provisions and outcome-focused evaluations. This means assessing operational and concentration risks through severe yet plausible scenarios, ensuring they’re prepared for the worst-case while staying vigilant in everyday operations. ... As the complexity of third-party risk grows, so too does the role of AI and automation. The days of relying on spreadsheets and homegrown databases are long gone. Ed’s thoughts on this topic are unequivocal: “AI and automation are critical as third-party risk becomes increasingly complex. Significant work is required for initial risk assessments, pre-contract due diligence, post-contract monitoring, SLA reviews and offboarding.”


Five Ways Your Platform Engineering Journey Can Derail

Chernev’s first pitfall is when a company tries to start platform engineering by only changing the name of its current development practices, without doing the real work. “Simply rebranding an existing infrastructure or DevOps or SRE practice over to platform engineering without really accounting for evolving the culture within and outside the team to be product-oriented or focused” is a huge mistake ... Another major pitfall, he said, is not having and maintaining product backlogs — prioritized lists of work for the development team — that are directly targeting your developers. “For the groups who have backlogs, they are usually technology-oriented,” he said. “That misalignment in thinking across planning and missing feedback loops is unlikely to move progress forward within the organization. That ultimately leads the initiative to fail to deliver business value. Instead, they should be developer-centric,” said Chernev. ... This is another important point, said Chernev — companies that do not clearly articulate the value-add of their platform engineering charter to both technical and non-technical stakeholders inside their operations will not fully be able to reap the benefits of the platform’s use across the business.


Building generative AI applications is too hard, developers say

Given the number of tools they need to do their job, it’s no surprise that developers are loath to spend a lot of time adding another to their arsenal. Two thirds of them are only willing to invest two hours or less in learning a new AI development tool, with a further 22% allocating three to five hours, and only 11% giving more than five hours to the task. And on the whole, they don’t tend to explore new tools very often — only 21% said they check out new tools monthly, while 78% do so once every one to six months, and the remaining 2% rarely or never. The survey found that they tend to look at around six new tools each time. ... The survey highlights the fact that, while AI and generative AI are becoming increasingly important to businesses, the tools and techniques require to develop them are not keeping up. “Our survey results shed light on what we can do to help address the complexity of AI development, as well as some tools that are already helping,” Gunnar noted. “First, given the pace of change in the generative AI landscape, we know that developers crave tools that are easy to master.” And, she added, “when it comes to developer productivity, the survey found widespread adoption and significant time savings from the use of AI-powered coding tools.”


AI infrastructure – The value creation battleground

Scaling AI infrastructure isn’t just about adding more GPUs or building larger data centers – it’s about solving fundamental bottlenecks in power, latency, and reliability while rethinking how intelligence is deployed. AI mega clusters are engineering marvels – data centers capable of housing hundreds of thousands of GPUs and consuming gigawatts of power. These clusters are optimized for machine learning workloads with advanced cooling systems and networking architectures designed for reliability at scale. Consider Microsoft’s Arizona facility for OpenAI: with plans to scale up to 1.5 gigawatts across multiple sites, it demonstrates how these clusters are not just technical achievements but strategic assets. By decentralizing compute across multiple data centers connected via high-speed networks, companies like Google are pioneering asynchronous training methods to overcome physical limitations such as power delivery and network bandwidth. Scaling AI is an energy challenge. AI workloads already account for a growing share of global data center power demand, which is projected to double by 2026. This creates immense pressure on energy grids and raises urgent questions about sustainability.


4 Leadership Strategies For Managing Teams In The Metaverse

Leaders must develop new skills and adopt innovative strategies to thrive in the metaverse. Here are some key approaches:Invest in digital literacy—Leaders must become fluent in the tools and technologies that power the metaverse. This includes understanding VR/AR platforms, blockchain applications and collaborative software such as Slack, Trello and Figma. Emphasize inclusivity—The metaverse has the potential to democratize access to opportunities, but only if it’s designed with inclusivity in mind. Leaders should ensure that virtual spaces are accessible to employees of all abilities and backgrounds. This might include providing hardware like VR headsets or ensuring platforms support diverse communication styles. Create rituals for connection—Leaders can foster connection through virtual rituals and gatherings in the absence of physical offices. These activities, from weekly team check-ins to informal virtual “watercooler” chats, help build camaraderie and maintain a sense of community. Focus on well-being—Effective leaders prioritize employee well-being by setting clear boundaries, encouraging breaks and supporting mental health.


How AI will shape work in 2025 — and what companies should do now

“The future workforce will likely collaborate more closely with AI tools. For example, marketers are already using AI to create more personalized content, and coders are leveraging AI-powered code copilots. The workforce will need to adapt to working alongside AI, figuring out how to make the most of human strengths and AI’s capabilities. “AI can also be a brainstorming partner for professionals, enhancing creativity by generating new ideas and providing insights from vast datasets. Human roles will increasingly focus on strategic thinking, decision-making, and emotional intelligence. ... “Companies should focus on long-term strategy, quality data, clear objectives, and careful integration into existing systems. Start small, scale gradually, and build a dedicated team to implement, manage, and optimize AI solutions. It’s also important to invest in employee training to ensure the workforce is prepared to use AI systems effectively. “Business leaders also need to understand how their data is organized and scattered across the business. It may take time to reorganize existing data silos and pinpoint the priority datasets. To create or effectively implement well-trained models, businesses need to ensure their data is organized and prioritized correctly.



Quote for the day:

"The world is starving for original and decisive leadership." -- Bryant McGill

Daily Tech Digest - December 08, 2024

Here’s the one thing you should never outsource to an AI model

One of the biggest dangers in letting AI take the reins of your product ideation process is that AI processes content — be it designs, solutions or technical configurations — in ways that lead to convergence rather than divergence. Given the overlapping bases of training data, AI-driven R&D will result in homogenized products across the market. Yes, different flavors of the same concept, but still the same concept. Imagine this: Four of your competitors implement gen AI systems to design their phones’ user interfaces (UIs). Each system is trained on more or less the same corpus of information — data scraped from the web about consumer preferences, existing designs, bestseller products and so on. What do all those AI systems produce? Variations of a similar result. What you’ll see develop over time is a disturbing visual and conceptual cohesion where rival products start mirroring one another. ... In platforms like ArtStation, many artists have raised concerns regarding the influx of AI-produced content that, instead of showing unique human creativity, feels like recycled aesthetics remixing popular cultural references, broad visual tropes and styles. This is not the cutting-edge innovation you want powering your R&D engine.


How much capacity is in aging data centers?

Individual data centers have considerable differences between them, and one of the most critical is their size. With this weighting factor, the average moves — but not by much. The “average megawatt” is 10.2 years old. Whereas older data centers (10-plus years) represent 48 percent of the survey sample, they contain 38 percent of the total IT capacity — still a large minority. Interestingly, a more dramatic shift occurs within the population of data centers that have been operating for less than 10 years — well within the typical design lifespan. By facility count alone, there is an even split between the data centers that are one to five years old and those that have been in operation for six to ten years. But when measuring in megawatts, the newest data centers hold significantly more capacity (38 percent) than those with six to ten years of service. This is intuitive; in the past five years, some data center projects have reached unprecedented sizes. Very recent builds are overshadowing the capacity of data centers that are only slightly older, even though the designs are not dramatically different. However, the weighted figures above suggest that even this massive build-out has not yet overcome the moderating influence of much older, potentially less efficient facilities.


Generative AI is making traditional ways to measure business success obsolete

Often touted as the “iron triangle” from the perspective of operational efficiency, this equation implies that, in order to attain a degree of quality, firms must balance cost with the time spent to achieve that level of quality. ... AI has upended this thinking, as firms can now achieve both speed and accuracy at the same time by leveraging AI. This can enhance productivity and drive innovation without losing out on quality. Likewise, through generative AI, smaller companies with fewer resources are able to rub shoulders and compete with larger firms using AI-powered tools. They can do this by streamlining operations, creating cost-effective marketing content and delivering personalised customer experiences. This can make existing businesses more efficient, competitive and creative. It can also lower the barriers to entry into markets for prospective small and medium-sized business owners. ... The UK government’s recent autumn budget included a number of tax rises that will hit businesses, especially some small and medium-sized enterprises (SMEs) that don’t have the financial buffers to weather severe economic challenges. Generative AI has reconfigured the Cost x Time = Quality formula and has enabled firms to do things both quickly and accurately without a trade off.


UK Cyber Risks Are ‘Widely Underestimated,’ Warns Country’s Security Chief

“What has struck me more forcefully than anything else since taking the helm at the NCSC is the clearly widening gap between the exposure and threat we face, and the defences that are in place to protect us,” he said. “And what is equally clear to me is that we all need to increase the pace we are working at to keep ahead of our adversaries.”  ... Horne added that the guidance and frameworks drawn up by the NCSC are not widely used. Ultimately, businesses need to change their perspective on cyber security from a “necessary evil” or “compliance function” to “an integral part of achieving their purpose.” ... “The defence and resilience of critical infrastructure, supply chains, the public sector and our wider economy must improve” to protect against these nation-state threats, Horne said. Ian Birdsey, partner and cyber specialist at law firm Clyde & Co, told TechRepublic in an email: “The UK has increasingly become a target for hostile nations due to the redrawing of geopolitical battle lines and the rise in global conflicts in recent years. In turn, threat actors based in those territories are increasingly launching more severe and sophisticated cyberattacks on UK organisations, particularly within critical national infrastructure and its supply chain.


5 JavaScript Libraries You Should Say Goodbye to in 2025

jQuery is the grandparent of modern JavaScript libraries, loved for its cross-browser support, simple DOM manipulation, and concise syntax. However, in 2025, it’s time to officially let go. Native JavaScript APIs and modern frameworks like React, Vue, and Angular have rendered jQuery’s core utilities obsolete. Not to mention, vanilla JavaScript now includes native methods such as querySelector, addEventListener, and fetch that more conveniently provide the functionality we once relied on jQuery to deliver. Also, modern browsers have standardized, making the need for a cross-browser solution like jQuery redundant. Not to mention, bundling jQuery into an application today can add unnecessary bloat, slowing down load times in an age when speed is king. ... Moment.js was the default date-handling library for a long time, and it was celebrated for its ability to parse, validate, manipulate, and display dates. However, it’s now heavy and inflexible compared to newer alternatives, not to mention it’s been deprecated. Moment.js clocks in at around 66 KB (minified), which can be a significant payload in an era where smaller bundle sizes lead to faster performance and better UX.


How media, publishing and entertainment organizations can master Data Governance in the age of AI

One of the reasons AI governance has proven to be such a challenging new discipline is that it’s so multifaceted. Tiankai explained that it’s comprised of several key elements: Ownership and stewardship: AI models need ownership, and so does AI governance. The right people must be accountable for ensuring AI models are used in the right ways. Cross-functional decision-making: A cross-domain thinking and decision-making model is essential. One central function can’t make every AI-relevant governance decision, so you need ways to bring the accountable people together. Processes and metadata: Teams must make their models explainable, so everyone can understand the quality of their outputs and the root causes of any negative outcomes. Technology enablement: Technology must support governance frameworks and make them work at scale. This shows that AI governance requires a combination of people, process and technology change. The panel agreed that the ‘people’ element is the toughest to manage effectively. Nathalie Berdat, Head of Data and AI Governance, BBC, explained some of the people-specific challenges that she has encountered along its AI governance journey. 


5 ways to tell people what to do at work

Nick Woods, CIO of airport group MAG, said dialogue is the priority for any professional who wants to avoid ambiguity. "If you're telling somebody what to do, you're already in the wrong place," he said. "Success is about a coaching, conversational dialogue that you need to have that ultimately comes down to a handshake on, 'Are we clear on what's next?'" Woods told ZDNET that most management decisions involve an ongoing debate. He doesn't believe in being directive about outputs and telling people what they need to go and do. "I think I'm much more in a space of, 'Actually, I've hired good people. I'm going to allow you to go and tell me what we need to do, and then we're going to have a dialogue about it,'" he said. ... Niall Robinson, head of product innovation at the Met Office, said talented staff should be given space to express their creativity. "There's a temptation as a leader to tell people how to do stuff -- and that can be a trap," he said. Robinson told ZDNET that he focuses on avoiding that problem by trusting his staff to generate recommended actions. "A habit I've been trying to practice is to tell people what success looks like and then giving them the agency to describe the options to me because they're closer to many of the solutions. So, success is about giving people the power to advise me."


Navigating NextGen Enterprise Architecture with GenAI

GenAI can modernize technology architecture by facilitating optimal best-of-breed solutions selection based on diverse criteria deep analyses. It offers tailored guidance aligned with business requirements as well as key capabilities such as scalability, resilience, and reversibility. This dynamic capacity adapts to evolving IT landscapes and business requirements, continuously refining recommendations based on the changing need and technological state-of-art. Moreover, GenAI accelerates homemade solutions development by generating code snippets. It produces-free functions and classes code segments written in any programming language, which improves efficiency and reduces manual coding efforts. This capacity improves developers' productivity and allows teams to focus more on high-level design. It also ensures that generated code is aligned with coding standards related to maintainability, readability, collaboration, and consistency. GenAI has amazing advantages, but it also has some major challenges. One of them is sustainability issues, which are increasingly important in technology adoption. In fact, many enterprises take this criterion into account in their technology architecture principles and assess it when they select a new solution to enhance their IT landscape.


The 7 R's of cloud migration: How to choose the right method

The R's model isn't new, but it has evolved significantly over the years. Its genesis is usually attributed to Gartner, who came up with the 5 R's model back in 2010. The original five were rehost, refactor, revise, rebuild and replace. As the cloud continued to evolve and more diverse workloads were being migrated to the cloud, AWS added a sixth R -- retire -- and eventually, a seventh, for retain. This seventh R is effectively an acknowledgment that not all workloads are suited to being hosted in the cloud. ... Rehosting can be done in a few ways, but it often means creating cloud-based virtual machines that mimic the infrastructure an application is currently running on. ... Rehosting an application requires you to create a cloud VM instance and then move the application onto that instance. Relocating, on the other hand, involves moving an existing VM from an on-premises environment to the cloud without making significant changes to it. ... A workload might be suitable for retirement if it is no longer actively supported by the vendor. In such cases, it's important to make sure you have a workaround before retiring an application the organization still uses. That might mean adopting a competing application that offers similar functionality or developing one in-house.


Evolving Your Architecture: Essential Steps and Tools for Modernization

Tech debt, lack of modernization can also get you out there in the news, and not as a very good thing, as we could see for SWA a couple years ago when they had a pretty huge meltdown with their booking systems and all that. It damaged their image, but also got them pretty down on their plans in revenue and all that, and still, nowadays they are facing the consequences of that meltdown, which was basically because of ignoring and putting aside the conversations about tech debt and application modernization as a whole. ... It's basically looking at the inventory of applications that you have in your organization, and understanding, what are the critical ones? What is the value that it adds? Alignment with the business goals. Really like, is it commodity? Can I just go and buy one out of the shelf, two? Then it's fine, go and buy it. If it's something that differentiates you, you got to innovate, then it might be worth looking at building it and hence modernizing it. ... The other thing is the age of technology. If you have outdated technology, you very likely have vulnerabilities. If you have lack of support, either from the community or the vendors, there is a security vulnerability there, but there is no security patch being released because there is no support anymore.



Quote for the day:

"Do something today that your future self will thank you for." -- Unknown

Daily Tech Digest - April 24, 2024

The shift towards a combined framework for API threat detection and the protection of vital business applications signals a move to proactive and responsive security. ... Companies cannot afford to underestimate the threat bots pose to their API-driven applications and infrastructure. Traditional silos between fraud and security teams create dangerous blind spots. Fraud detection often lacks visibility into API-level attacks, while API security tools may overlook fraudulent behavior disguised as legitimate traffic. This disconnect leaves businesses vulnerable. By integrating fraud detection, API security, and advanced bot protection, organizations create a more adaptive defense. This proactive approach offers crucial advantages: swift threat response, the ability to anticipate and mitigate vulnerabilities exploited by bots and other malicious techniques, and an in-depth understanding of application abuse patterns. These advantages lead to more effective threat identification and neutralization, combating both low-and-slow attacks and sudden volumetric attacks from bots.


Fortifying the Software Supply Chain

Firstly, it enhances security and compliance by consolidating code repositories in a single, cloud-based platform. This allows organizations to gain better control over access permissions and enforce consistent security policies across the entire codebase. Centralized environments can be configured to comply with industry standards and regulations automatically, reducing the risk of breaches that could disrupt the supply chain. As Jen Easterly, Director of the U.S. Cybersecurity and Infrastructure Security Agency (CISA), emphasizes, centralizing source code in the cloud aligns with the goal of working with the open source community to ensure secure software while reaping its benefits. Secondly, cloud-based centralization fosters improved collaboration and efficiency among development teams. With a centralized platform, teams can collaborate in real-time, regardless of their geographical location, facilitating faster decision-making and problem-solving. ... Thirdly, centralized cloud environments offer enhanced reliability and disaster recovery capabilities. Cloud providers typically replicate data across multiple locations, ensuring that a failure in one area does not result in data loss.


GenAI can enhance security awareness training

Social engineering is fundamentally all about psychology and putting the victim in a situation where they feel under pressure to make a decision. Therefore, any form of communication that imparts a sense of urgency and makes an unusual request needs to be flagged, not immediately responded to, and subjected to a rigorous verification process. Much like the concept of zero trust, the approach should be “never trust, always verify”, and the education process should outline the steps that should be taken following an unusual request. For instance, in relation to CFO fraud, the accounts department should have a set limit for payments and exceeding these should trigger a verification process. This might see staff use a token-based system or authenticator to verify the request is legitimate. Secondly, users need to be aware of oversharing. Is there a company policy to prevent information being divulged over the phone? Restrictions on the posting of company photos that an attacker could exploit? Could social media posts be used to guess passwords or an individual’s security questions? Such steps can reduce the likelihood of digital details being mined.


AI and Human Capital Management: Bridging the Gap Between HR and Technology

Amid the strides of technology, the relevance of the human factor remains important for the human resource professional. They grapple with the challenge of seamlessly blending advanced technology with the distinctive human elements that define their workforce. In this contemporary landscape, HR assumes a novel role as a vital link between the embrace of technology and the preservation of human relations. The shift to this new way of working requires an appropriate use of technology to support and enhance existing HR capabilities, thereby increasing their flexibility and effectiveness. ... Ethical considerations comprise one of the most significant challenges while applying AI in HR, which mandates the implementation of mechanisms like equal opportunities, fair decision-making, and transparency in AI utilization in carrying out the duties. Additionally, the integration of AI into HR operations necessitates modifying change management processes to provide reassurance, organize skill preparation and training programs, and cultivate the acceptance of AI in the organization. 


Differential Privacy and Federated Learning for Medical Data

Federated learning is a key strategy to build that trust backed up by the technology, not only on contracts and faith in ethics of particular employees and partners of the organizations forming consortia. First of all, the data remains at the source, never leaves the hospital, and is not being centralized in a single, potentially vulnerable location. Federated approach means there aren’t any external copies of the data that may be hard to remove after the research is completed. The technology blocks access to raw data because of multiple techniques that follow defense in depth principle. Each of them is minimizing the risk of data exposure and patient re-identification by tens or thousands of times. Everything to make it economically unviable to discover nor reconstruct raw level data. Data is minimized first to expose only the necessary properties to machine learning agents running locally, PII data is stripped, and we also use anonymization techniques. Then local nodes protect local data against the so-called too curious data scientist threat by allowing only the code and operations accepted by local data owners to run against their data.


CIO risk-taking 101: Playing it safe isn’t safe

As CIO, you’re in the risk business. Or rather, every part of your responsibilities entails risk, whether you’re paying attention to it or not. And in spite of the spate of books that extol risk-taking as the only smart path, it’s worth remembering that their authors don’t face what might be the biggest risk CIOs have to deal with every day: executive teams adept at preaching risk-taking without actually supporting it. ... ut the staff members and sponsor who annoy you the most in charge of these initiatives. Worst case they succeed, and people you don’t like now owe you a favor or two. Best case they fail and will be held accountable. You can’t lose. ... Those who encourage risk-taking often ignore its polysemy. One meaning: initiatives that, as outlined above, have potential benefit but a high probability of failure. The other: structural risks — situations that might become real and would cause serious damage to the IT organization and its business collaborators if they do. You can choose to not charter a risky initiative, ignoring and eschewing its potential benefits. When it comes to structural risks you can ignore them as well, but you can’t make them go away by doing so and will be blamed if they’re “realized”


Digital Personal Data Protection Act, 2023 - Impact on Banking Sector Outsourced Services

Regulated Entities must design their own privacy compliance program for application-based services – and not solely rely on a package provided by the SP. While the SP may add value and save costs, any solution it provides will likely be optimized for its own efficiency. Customer data management practices can differentiate a business from competitors, enhance customer trust, and provide a competitive advantage. Also, financial penalties under the DPDP are high, extending up to INR 250 crores (on the Regulated Entity as the ‘data fiduciary’ and not on the processor), apart from the reputational damage a breach or prosecution can cause, making it critical to have thorough oversight over the SP vis-à-vis privacy protection. ... For consumer facing services, in addition to security, SPs must technically ensure that the client can comply with its DPDP obligations such as data access requests, erasure, correction and updating personal data, consent withdrawal. Also, the platform should be capable of integrating with consent managers.


Why Is a Data-Driven Culture Important?

Trust and commitment are two important features in a data-driven culture. Trust in the data is exceptionally important, but trust in other staff, for purposes of collaboration and teamwork, is also quite important. Dealing with internal conflicts and misinformation disrupts the smooth flow of doing business. There are a number of issues to consider when creating a data-driven culture. ... In a data-driven culture, everyone should be involved, and this should be communicated to staff and management (exceptions are allowed, for example, the janitor). Everyone using data in doing their job should understand they are also creating data that can be used later for research. When people understand their roles, they can work together as an efficient team to find and eliminate sources of bad data. The process of locating and repairing sources of poor-quality data acts as an educational process for staff and empowers them to be proactive, taking responsibility when they notice a data flow problem. Shifting to a data-driven culture may result in having to hire a few specialists – individuals who are skilled in Data Management, data visualization, and data analysis. 


Harnessing the Collective Ignorance of Solution Architects

One key benefit of adopting an architecture platform, and having different development teams contribute and maintain their designs in a shared model, is that higher levels of abstraction can gain a wide-angled view of the resulting picture. At the context level, the view becomes enterprise-wide, with an abstracted map of the entire application landscape, and how it is joined up, both from an IT perspective, and to its consuming organizational units, revealing its contribution to business products and services, value streams and business capabilities. ... by combining the abstracting power of a modern EA platform with the consistency and integrity of the C4 model approach, I have removed from their workload the Sysyphean task of hand-crafting an enterprise IT model and replaced it with the “collective ignorance” of an army of supporters who will construct and maintain that enterprise view out of their own interest. Guidance and encouragement are all that is required. The model will remain consistent with the aggregated truth of all solution designs because they are one and the same model, viewed from different angles with different degrees of “selective ignorance”.


5 Hard Truths About the State of Cloud Security 2024

"There's a fundamental misunderstanding of the cloud that somehow there's more security natively built into it, that you're more secure by going to the cloud just by the act of going to the cloud," he says. The problem is that while hyperscale cloud providers may be very good at protecting infrastructure, the control and responsibility over their customer's security posture they have is very limited. "A lot of people think they're outsourcing security to the cloud provider. They think they're transferring the risk," he says. "In cybersecurity, you can never transfer the risk. If you are the custodian of that data, you are always the custodian of the data, no matter who's holding it for you." ... "So much of the zero trust narrative is about identity, identity, identity," Kindervag says. "Identity is important, but we consume identity in policy in zero trust. It's not the end-all, be-all. It doesn't solve all the problems." What Kindervag means is that with a zero trust model, credentials don't automatically give users access to anything under the sun within a given cloud or network. The policy limits exactly what and when access is given to specific assets. 



Quote for the day:

"Great achievers are driven, not so much by the pursuit of success, but by the fear of failure." -- Larry Ellison

Daily Tech Digest - February 15, 2024

CISO and CIO Convergence: Ready or Not, Here It Comes

While CIOs are still responsible for setting and meeting technology goals and for staying on budget, their primary mandate is determining how the company can harness technology to innovate, and then procure and manage those resources. While plenty of companies still maintain large, on-premise IT estate, it's just a matter of time before they digitally transform. Either way, the CIO role has become markedly less operational over time. On the other hand, the profile of CISOs has been growing since the early 2000s, set against a non-stop carousel of compliance mandates, data breaches, and emerging cybersecurity threats. While data breaches may have forced businesses to pay attention to security, it was compliance mandates that funded it. From HIPAA and PCI DSS to GDPR, SOC 2, and more, compliance has been a double-edged sword for CISOs. Compliance increased the role of cybersecurity teams and made them more visible across IT and the business as a whole, providing CISOs with bigger budgets and increased latitude on how to spend it. However, all the effort they put into compliance did little to stymie phishing, ransomware, big breaches, and/or malicious insiders. 


Will Generative AI Kill DevSecOps?

Beyond having automation and guardrails in place, you also need security policies at the company level, Moisset said, to make sure that DevSecOps understands all the generative AI tools colleagues are using. Then you can educate them on how to use it, like creating and communicating a generative AI policy. Because a total ban on GenAI just won’t fly. When Italy temporarily banned ChatGPT, Foxwell said there was a visible decrease in productivity across the country’s GitHub organizations, but, when it was reinstated, “what also picked up was the usage of tools that circumvented all of the government policies and firewalls around the prevention of using these” tools. Engineers always find a way. Particularly when using generative AI for customer service chatbots, Moisset said, you need guardrails in place around both the inputs and outputs, as malicious actors can potentially “socialize” the chatbot via prompt injection to give a desired result — like when someone was able to buy a Chevy for $1 from a chatbot. “It’s back to educating the users and developers that it’s good to use AI, we should be using AI, but we need to actually put guardrails around it,” she said, which also demands an understanding of how your customers interact with GenAI.


Combining heat and compute

Data centers offer a predictable supply of heat because they keep their servers running continuously. But the heat is “low-grade:” It is warm rather than hot, and it comes in the form of air, which is difficult to transport. So, most data centers vent their heat to the atmosphere. Sometimes, there are district heat networks, which provide warmth to local homes and businesses through a piped network. If your data center is near one of these, it is a matter of extending it to connect to the data center, and boosting the grade of heat. But you have to be in the right place to connect to one. “There are certain countries that have established or developing heat networks, but the majority don't have a heat network per se, so it's going on a piecemeal basis,” Neal Kalita, senior director of power and energy at NTT, tells DCD. You are unlikely to find one in the US, says Rolf Brink of cooling consultancy Promersion: “The United States is a fundamentally different ecosystem. But Europe is a lot more dense in terms of population, and there is more heat demand.” The Nordic countries have a lot of heat networks. Stockholm Data Parks is a well-known example - a data center campus in urban Stockholm, where every data center has a connection to the district heating network and gets paid for its heat.


Harmonizing human potential and AI: The evolution of work in the digital era

The evolving landscape of work is witnessing a profound transformation as the fusion of human potential with AI takes center stage. Concerns about the ethical implications of AI are well-known, including the potential for perpetuating bias and discrimination and its impact on employment and job security. Ensuring that AI is developed and deployed ethically and responsibly is crucial, taking into account fairness, transparency and accountability. ... Optimizing human-centric capabilities with automation and an AI-first mindset is significant for long-term success. Consider a telecoms operator with employees struggling to grapple with the labor-intensive process of manually reviewing a high volume of mobile tower lease contracts. By embracing an AI-powered platform equipped with capabilities for faster and more accurate extraction of contract clauses, employees were able to shift their focus toward leveraging hidden risks identified by the platform. This enabled the renegotiation of existing contracts, leading to millions of dollars in savings. It’s no coincidence that the enterprises that are more inclined to augment human potential are those resilient enough to maximize the value of AI-led transformations. 


5 Wi-Fi vulnerabilities you need to know about

Like wired networks, Wi-Fi is susceptible to Denial of Service (DoS) attacks, which can overwhelm a Wi-Fi network with excessive amount of traffic. This can cause the Wi-Fi to become slow or unavailable, disrupting normal operations of the network, or even the business. A DoS attack can be launched by generating a large number of connection or authentication requests, or injecting the network with other bogus data to break the Wi-Fi. ... Wi-jacking occurs when a Wi-Fi-connected device has been accessed or taken over by an attacker. The attacker could retrieve saved Wi-Fi passwords or network authentication credentials on the computer or device. Then they could also install malware, spyware, or other software on the device. They could also manipulate the device’s settings, including the Wi-Fi configuration, to make the device connect to rogue APs. ... RF interference can cause Wi-Fi disruptions. Instead of being caused by bad actors, RF interference could be triggered by poor network design, building changes, or other electronics emitting or leaking into the RF space. Interference can result in degraded performance, reduced throughput, and increased latency.


AI outsourcing: A strategic guide to managing third-party risks

Bias may persist in many face detection systems. Naturally, this misidentification could have severe consequences for the parties involved. Diverse training data and transparent algorithms are necessary to mitigate the risk of discriminatory outcomes. Furthermore, complex AI models often encounter the “black box” problem or how some AI models arrive at their decisions. Teaming with a third-party AI service requires human oversight to navigate the threat of biased algorithms. ... Most of us can admit that the risk of becoming overly reliant on AI is significant. AI can quickly become a go-to solution for many challenges. It’s no surprise that companies face a similar risk, becoming too dependent on a single vendor’s AI solutions. However, this approach can become problematic. Companies can “get stuck,” and switching providers seems almost impossible. ... Quality and reliability concerns are top-of-mind for most company leaders partnering with third-party AI services. Some primary concerns include service outages, performance issues, and unexpected disruptions. Operational resilience is necessary, and contingency plans are a significant piece of the resiliency puzzle, given the damage business downtime can cause. 


Practices for Implementing an Effective Data Governance Strategy

Ensuring the integrity and usability of data within an organization requires implementing clear data quality standards and metrics. These standards serve as a benchmark for data quality, guiding data management practices and ensuring that data is accurate, complete, and reliable. Organizations can streamline their data governance processes by defining what constitutes quality data, making it easier to identify and rectify data issues. This approach enhances data quality, supports compliance with regulatory requirements, and improves decision-making capabilities. Developing a comprehensive set of data quality metrics is crucial for monitoring and maintaining high data standards. These metrics should be aligned with the organization’s strategic objectives and include criteria such as accuracy, completeness, consistency, timeliness, and uniqueness. ... Creating an environment where data stewardship and accountability are at the forefront requires strategic planning and commitment from all levels of an organization. It is essential to embed data governance principles into the corporate culture, ensuring that every team member understands their role in maintaining data integrity and security.


What is the impact of AI on storage and compliance?

Right now, when you look at traditional storage, generally speaking you look at your environment, your ecosystem, your data, classifying that data, and putting a value on it. And, depending on that value and the potential impact, you put in the right security and assign the length of time you need to keep the data and how you keep it, delete it. But, if you look at a CRM [customer relationship management service], if you put the wrong data in then the wrong data comes out, and it’s one set of data. So, to be blunt, garbage in, garbage out. With AI, it’s much more complex than that, so you may have garbage in, but instead of one dataset out that might be garbage, there might be a lot of different datasets and they may or may not be accurate. If you look at ChatGPT, it’s a little bit like a narcissist. It’s never wrong and if you give it some information and then it spits out the wrong information and then you say, “No, that’s not accurate”, it will tell you that’s because you didn’t give it the right dataset. And then at some stage it will stop talking to you, because it will have used up all its capability to argue with you, so to speak. From a compliance perspective, if you are using AI – a complicated AI or a simple AI like ChatGPT – to create a marketing document, that’s OK.


How to Get Your Failing Data Governance Initiatives Back on Track

Data governance is a big lift. Organizations might make the mistake of attempting to roll the initiative out across the entire enterprise without building in the steps to get there. “If you make it too broad and end up not focusing on short-term goals that you can demonstrate to keep the funding going, these engagements [tend] to fail,” says Prasad. Organizational issues are some of the major stumbling blocks standing in the way of successful data governance, but there can also be technical obstacles. Reiter points to the importance of leveraging automation. If an enterprise team attempts to manually undertake data governance mapping, it could be irrelevant by the time it is completed. ... Documentation, or lack thereof, can be a good indicator of a data governance initiatives' progress and sustainability. “As things are changing over time and documentation isn’t updated, that's a great sign that governance is not maintainable,” Holiat says. Getting feedback from end users can alert data governance leaders to issues standing in the way of adoption. Are people throughout the organization frustrated with the data governance program? Does it facilitate their access to data, or is it making their jobs more difficult?


Adopting AI with Eyes Wide Open

For businesses in general, AI can increase efficiency, make the workplace safer, improve customer service, create competitive advantage and lead to new business models and revenue streams. But like any technological innovation, AI has its risks and challenges. At the heart of AI is code and data; code can (and often does) contain bugs, and data can (and often does) contain anomalies. But that is no different to the technological innovations that we have embraced to-date. Arguably, the risks and challenges of AI are greater – not least of all because of the potential breadth of its application – and they include (but are certainly not limited to): overreliance, lack of transparency, ethical concerns, security, and regulatory and statutory challenges which typically lag behind the pace of progress. So, what does have this to do with strategy and architecture, and in particular digital transformation? Too often in organizations, new technologies are rushed in, in the belief that there is no time to lose. Before you know it, the funds and resources have been found to embark on an initiative (programme or project) to adopt it, spearheading the way to the future. It is the future! 



Quote for the day:

"I find that the harder I work, the more luck I seem to have." -- Thomas Jefferson

Daily Tech Digest - February 03, 2024

NCA’s Plaggemier on Finding a Path to Data Privacy Compliance

On the international stage, companies are becoming more aware of the more active and robust policies they may face and the penalties they can carry. That has led to some patterns, Plaggemier says, developing around what is reasonable for companies to enact in relation to their sector and industry. “Do you have security or privacy tools or practices in place that are in line with your competitors?” she asks. While such an approach might be considered reasonable at first, competitors might be way ahead with much more mature programs, Plaggemier says, possibly making copying rivals no longer a reasonable approach and compelling companies to find other ways to achieve compliance. Data privacy regulations continue to gain momentum, and she believes it will be interesting to see what further kind of enforcement actions develop and how the courts in California, for example, manage. As CCPA and other state-level regulations continue into their sophomore eras, Plaggemier says at least a few more states seem likely to get on the bandwagon of data privacy regulation. Meanwhile, there is also some growing concern about how AI may play a role in potential abuses of data in the future.


What Is Enterprise Architecture? (And Why Should You Care About It)

Ideally, Enterprise Architecture supplies the context and insight to guide Solution Architecture. To address broad considerations, and align diverse stakeholder viewpoints, Enterprise Architecture often needs to be broader, less specific, and often less technical than Solution Architecture. ... Done well, Enterprise Architecture should provide long-term guidance on how different technology components support overall business objectives. It should not prescribe how technology is, or should be, implemented, but rather provide guardrails that help inform design decisions and prioritization. Additionally, most organizations have several technology components that support business operations; Salesforce is usually just one. Understanding how the various technology components work together will enable you to be a well-informed contributing member of a larger team. EA can help to provide valuable context about how Salesforce interacts with other systems and might spark ideas on how Salesforce specifically can be better utilized to support an organization.


AnyDesk says hackers breached its production servers, reset passwords

In a statement shared with BleepingComputer late Friday afternoon, AnyDesk says they first learned of the attack after detecting indications of an incident on their product servers. After conducting a security audit, they determined their systems were compromised and activated a response plan with the help of cybersecurity firm CrowdStrike. AnyDesk did not share details on whether data was stolen during the attack. However, BleepingComputer has learned that the threat actors stole source code and code signing certificates. The company also confirmed that the attack did not involve ransomware but didn't share too much information about the attack other than saying their servers were breached, with the advisory mainly focusing on how they responded to the attack. As part of their response, AnyDesk says they have revoked security-related certificates and remediated or replaced systems as necessary. They also reassured customers that AnyDesk was safe to use and that there was no evidence of end-user devices being affected by the incident. "We can confirm that the situation is under control and it is safe to use AnyDesk. 


The Ultimate 7-Step CEO Guide to Visionary Leadership

Unlike strategic objectives, which are rationally derived, visions are values-laden. They give meaning through an ideological goal. Since they are about what should be, they are, by definition, an expression of values and corporate identity. Thus, effective CEOs keep the vision malleable in relation to the business landscape but never change the values underneath. Not only that, but their personal values align with the organization and its vision — one reason for doing a values assessment in CEO succession. ... Some of the most catastrophic events in history have been the result of a psychopath's vision. Visions can be powerful, influential and morally corrupt — all at the same time. Conversely, real leaders create a vision that benefits the entire ecosystem, where the rising tide lifts all boats and makes the world a better place. Robert House, from the University of Pennsylvania, defined a greater good vision as "an unconscious motive to use social influence, or to satisfy the power need, in socially desirable ways, for the betterment of the collective rather than for personal self-interest." This is using the will to power for the betterment of humanity, to shape the future, rather than as a source of ruthless evildoing.


AI Revolutionizes Voice Interaction: The Dawn Of A New Era In Technology

So what can we do to make sure we’re ready for this universal shift to voice-controlled tech and having natural language conversations with machines? Dengel suggests the answer lies in meeting the challenge head-on. This means drawing together teams made of technologists, engineers, designers, communications experts and business leaders. Their core focus is to identify opportunities and potential risks to the business, allowing them to be managed proactively rather than reactively. “That’s always the first step,” he says, “because you start defining what’s possible, but you’re doing it in the context of what’s realistic as well because you’ve got your tech folks involved as well … ” It’s a “workshop” approach pioneered by Apple and adopted by various tech giants that have found themselves at the forefront of an emerging wave of transformation. But it’s equally applicable to just about any forward-looking business or organization that doesn’t want to be caught off-guard. Dengel says that addressing a group of interns recently, he told them, “I wish I were in your shoes – the next five years is gonna be more innovation than there’s been in the last five or maybe the last 20 years


Level up: Gamify Your Software Security

Gamification has been a great way to increase skills across the industry, and this has become particularly important as adversaries become more sophisticated and robust security becomes a critical piece to business continuity. ... We all love our extrinsic motivators, whether it’s stars or our green squares of activity on GitHub or even our badges and stickers in forums and groups. So why not create a reward system for security too? This makes it possible for developers to earn points, badges or status for successfully integrating security measures into their code, recognizing their achievements. ... Just as support engineers are often rewarded for the speed and volume of tickets they close, similar ideas can be used to advance security practices and hygiene in your organization. Use leaderboards to encourage a healthy competitive spirit and recognize individuals or teams for exceptional security contributions. ... This is in addition to the badges and other rewards mentioned above. I’ve seen recognition programs for other strategic initiatives in organizations, such as “Top Blogger” or “Top Speaker” and even special hoodies or swag awarded to those who achieve the title, giving it exclusivity and prestige.


802.11x: Wi-Fi standards and speeds explained

The big news in wireless is the expected ratification of Wi-Fi 7 (802.11be) by the IEEE standards body early this year. Some vendors are already shipping pre-standard Wi-Fi 7 gear, and the Wi-Fi Alliance announced in January that it has begun certifying Wi-Fi 7 products. While the adoption of Wi-Fi 7 is expected to have the most impact on the wireless market, the IEEE has been busy working on other wireless standards as well. In 2023 alone, the group published 802.11bb, a standard for communication via light waves; 802.11az, which significantly improves location accuracy; and 802.11bd for vehicle-to-vehicle wireless communication. Looking ahead, IEEE working groups are tackling new technology areas, such as enhanced data privacy (802.11bi), WLAN sensing (802.11bf), and randomized and changing MAC addresses (802.11bh). In addition, the IEEE has established special-interest groups to investigate the use of ambient energy harvested from the environment, such as heat, to power IoT devices. There’s a study group looking at standards for high-throughput, low-latency applications such as augmented reality/virtual reality. Another group is developing new algorithms to support AI/ML applications.


What is AI networking? Use cases, benefits and challenges

AI networking can optimize IT service management (ITSM) by handling the most basic level 1 and level 2 support issues (like password resets or hardware glitches). Leveraging NLP, chatbots and virtual agents can field the most common and simple service desk inquiries and help users troubleshoot. AI can also identify higher-level issues that go beyond step-by-step instructions and pass them along for human support. AI networking can also help reduce trouble ticket false-positives by approving or rejecting tickets before they are acted on by the IT help desk. This can reduce the probability that human workers will chase tickets that either weren’t real problems in the first place, were mistakenly submitted or duplicated or were already resolved. ... AI can analyze large amounts of network data and traffic and perform predictive network maintenance. Algorithms can identify patterns, anomalies and trends to anticipate potential issues before they degrade performance or cause unexpected network outages. IT teams can then act on these to prevent — or at least minimize — disruption. AI networking systems can also identify bottlenecks, latency issues and congestion areas. 


Low-Power Wi-Fi Extends Signals Up to 3 Kilometers

Morse Micro has developed a system-on-chip (SoC) design that uses a wireless protocol called Wi-Fi HaLow, based on the IEEE 802.11ah standard. The protocol significantly boosts range by using lower-frequency radio signals that propagate further than conventional Wi-Fi frequencies. It is also low power, and is geared toward providing connectivity for Internet of Things (IoT) applications. To demonstrate the technology’s potential, Morse Micro recently conducted a test on the seafront in San Francisco’s Ocean Beach neighborhood. They showed that two tablets connected over a HaLow network could communicate at distances of up to 3 km while maintaining speeds around 1 megabit per second—enough to support a slightly grainy video call. ... “It is pretty unprecedented range,” says Prakash Guda, vice president of marketing and product management at Morse Micro. “And it’s not just the ability to send pings but actual megabits of data.” The HaLow protocol works in much the same way as conventional Wi-Fi, says Guda, apart from the fact that it operates in the 900-megahertz frequency band rather than the 2.4-gigahertz band. 


How to Make the Most of In-House Software Development

Maintaining an in-house software development team can be tough. You must hire skilled developers – which is no easy feat in today’s economy, where talented programmers remain in short supply – and then manage them on an ongoing basis. You must also ensure that your development team is nimble enough to respond to changing business needs and that it can adapt as your technology stack evolves. Given these challenges, it’s no surprise that most organizations now outsource application development instead of relying on in-house teams. But I’m here to tell you that just because in-house development can be hard doesn’t mean that outsourcing is always the best approach. On the contrary, IT organizations that choose to invest in in-house development for some or all the work can realize lower overall costs and a competitive advantage by creating domain-specific expertise. Keeping development in-house can help organizations address unique security requirements and maintain full control over the development lifecycle and roadmaps. For businesses with specialized technology, security and operational needs, in-house development is often the best strategy.



Quote for the day:

“The first step toward success is taken when you refuse to be a captive of the environment in which you first find yourself.” -- Mark Caine

Daily Tech Digest - January 23, 2024

How human robot collaboration will affect the manufacturing industry

Traditional manufacturing systems frequently struggle to adjust to shifting demands and product variances. Human-robot collaboration gives flexibility, which is critical in today’s market. Robots are easily programmed and reprogrammed, allowing firms to quickly alter production lines to suit new goods or design changes. This adaptability is critical in an era where customer preferences shift quickly, and companies are trying to work in line with the shifting preferences of the customers. ... While the initial investment in robotics technology may be significant, the long-term cost savings from human-robot collaboration are attractive. Automated procedures in the manufacturing industries lower labor costs, boost productivity, and reduce errors to a great extent, resulting in a more cost-effective manufacturing operation. ... There is a notion that automation will replace human occupations, on the contrary, the collaboration is intended to supplement human abilities. Human workers may focus on critical thinking, problem-solving, and creativity by automating mundane and physically demanding jobs.


Mastering System Design: A Comprehensive Guide to System Scaling for Millions

Horizontal scaling emerges as a strategic solution to accommodate increasing demands and ensure the system’s ability to handle a burgeoning user base. Horizontal scaling involves adding more servers to the system and distributing the workload across multiple machines. Unlike vertical scaling, which involves enhancing the capabilities of a single server, horizontal scaling focuses on expanding the server infrastructure horizontally. One of the key advantages of horizontal scaling is its potential to improve system performance and responsiveness. By distributing the workload across multiple servers, the overall processing capacity increases, alleviating performance bottlenecks and enhancing the user experience. Moreover, horizontal scaling offers improved fault tolerance and reliability. The redundancy introduced by multiple servers reduces the risk of a single point of failure. In the event of hardware issues or maintenance requirements, traffic can be seamlessly redirected to other available servers, minimizing downtime and ensuring continuous service availability. Scalability becomes more flexible with horizontal scaling. 


Backup admins must consider GenAI legal issues -- eventually

LLMs requiring a massive amount of data and, by proxy, dipping into nebulous legal territory is inherent to GenAI services contracts, said Andy Thurai, an analyst at Constellation Research. Many GenAI vendors are now offering indemnity or other legal protections for customers. ... "It's a [legal] can of worms that enterprises can't afford to open," Thurai said. Unfortunately for enterprise legal teams, the need to create guidance is fast approaching. Lawsuits by organizations such as the New York Times are looking to take back IP control and copyright from the OpenAI's proprietary and commercial LLM model. Those suits are entirely focused on the contents of data itself rather than the mechanics of backup and storage that backup admins would concern themselves with, said Mauricio Uribe, chair of the software/IT and electrical practice groups at law firm Knobbe Martens. The business advantages of GenAI within backup technology are still unproven and unknown, he added. Risks such as patent infringement remain a possibility. Backup vendors are implementing GenAI capabilities such as support chatbots into their tools now, such as Rubrik's Ruby and Cohesity's Turing AI. But neither incorporates enterprise customer data or specific customer information, according to both vendors.


CFOs urged to reassess privacy budgets amid rising data privacy concerns

The ISACA Privacy in Practice 2024 survey report reveals that only 34% of organizations find it easy to understand their privacy obligations. This lack of clarity can lead to non-compliance and increased risk of data breaches. Additionally, only 43% of organizations are very or completely confident in their privacy team’s ability to ensure data privacy and achieve compliance with new privacy laws and regulations. ... To address the challenges outlined in the survey, organizations are taking proactive steps to strengthen their privacy programs. Training plays a crucial role in mitigating workforce gaps and privacy failures. Half of the respondents (50%) note that they are training non-privacy staff to move into privacy roles, while 39% are increasing the usage of contract employees or outside consultants. Organizations are also investing in privacy awareness training for employees. According to the survey, 86% of organizations provide privacy awareness training, with 66% offering training to all employees annually. Moreover, 52% of respondents provide privacy awareness training to new hires. 


Cisco sees headway in quantum networking, but advances are slow

Cisco has said that it envisions quantum data centers that could use classic LAN models to tie together quantum computers, or a quantum-based network that transmits quantum bits (qubits) from quantum servers at high-speeds to handle commercial-grade applications. “Another trend will be the growing importance of quantum networking which in 4 or 5 years – perhaps more – will enable quantum computers to communicate and collaborate for more scalable quantum solutions,” Centoni stated. “Quantum networking will leverage quantum phenomena such as entanglement and superposition to transmit information.” The current path for quantum researchers and developers is to continue to grow radix, expand mesh networking (the ability for network fabrics to support many more connections per port and higher bandwidth), and create quantum switching and repeaters, Pandey said. “We want to be able to carry quantum signals over longer distances, because quantum signals deteriorate rapidly,” he said. “We definitely want to enable them to handle those signals within a data center footprint, and that’s technology we will start experimenting on.”


Navigating the Digital Transformation: The Role of IT

While many acknowledged engaging with the six core elements of the Rewired framework, few participants considered themselves frontrunners in significant progress. This underscores the complexity and ongoing nature of digital transformation, necessitating continuous adaptation across leadership, culture, and technology. Organizations are directing efforts towards both front-end (customer experience) and back-end (operational optimization), recognizing the interconnected nature of digital transformation. Success stories include consolidating Robotic Process Automation (RPE), Artificial Intelligence (AI), and low-code development within a single organizational department. This integration facilitates synergies and holistic advancements in digital capabilities. The evolving nature of ERP transformations was also discussed, with a shift towards continuous improvements and a focus on operating models and ways of working, moving beyond purely technological considerations. The insights from this roundtable underscore the multifaceted nature of digital transformation.


Harvard Scientists Discover Surprising Hidden Catalyst in Human Brain Evolution

“Brain tissue is metabolically expensive,” said the Human Evolutionary Biology assistant professor. “It requires a lot of calories to keep it running, and in most animals, having enough energy just to survive is a constant problem.” For larger-brained Australopiths to survive, therefore, something must have changed in their diet. Theories put forward have included changes in what these human ancestors consumed or, most popularly, that the discovery of cooking allowed them to garner more usable calories from whatever they ate. ... The shift was probably a happy accident. “This was not necessarily an intentional endeavor,” Hecht posited. “It may have been an accidental side effect of caching food. And maybe, over time, traditions or superstitions could have led to practices that promoted fermentation or made fermentation more stable or more reliable.” This hypothesis is supported by the fact that the human large intestine is proportionally smaller than that of other primates, suggesting that we adapted to food that was already broken down by the chemical process of fermentation. 


Digital Personal Data Protection Act marks a new era of business-friendly governance

Surprising the business community, the DPDP Act 2023 removed the data localization requirements, marking a significant departure from the previous iterations of the Act. The earlier DPDP Bills required certain categories of personal data to be stored and processed within the country. The provision faced staunch global opposition, particularly from the US, which criticized India's requirements as discriminatory and trade distortive. In contrast, the DPDP Act, 2023 adopts a more inclusive approach, granting firms autonomy in the choice and location of cloud services for storing and processing personal data of their users. By prioritizing cost-effectiveness and competitiveness for the firms, the removal of data localisation requirements signals a more accommodating government stance. In addition to scrapping data localization requirements, the DPDP Act 2023 also allows unrestricted cross-border transfer of Indian users’ personal data abroad, barring certain destination countries. Firms would not be required to conduct post-transfer impact assessments or to ensure that the destination country has similar data protection standards– mandated in other jurisdictions like the EU and Vietnam. 


Cybersecurity: The growing partnership between HR and risk management

HR professionals themselves can also be attractive targets to bad actors. The access they have to sensitive employee and company data can be a goldmine for hackers, putting a target on the back of those within the HR organization. As such, HR leaders should put proactive, pre-breach policies in place for their own functional colleagues. Policies might include contacting internal and external parties who ask for changes to sensitive information, such as invoice numbers, email passwords, direct deposit details, and software updates. They should also include policies for remote workers and incidence response. ... When you purchase cyber insurance, you get access to pre-breach planning and policy templates, which for many organizations, is just as important as the breach coverage. While the optimal amount of insurance depends on many factors — including size, revenues, number of employees and access to confidential information — HR organizations of all sizes and structures benefit from pre-breach planning and policymaking.


IT services spending signals major role change for CIOs ahead

“This evolution in what CIOs do, the value proposition they bring to the company, is evident in the long-term playout. But it is not yet as evident to the CIOs themselves,” Lovelock said. He sees CIOs still thinking they are riding the same talent waves of the past, facing a temporary problem that they will solve: that their staff will come back, that hiring will resume, that attrition rates will decline, and that they will be able to attract the skills they need at prices they can afford. “It doesn’t look like they will ever be able to do that. There are too many things IT staff with these key resources and skills are looking for that are outside of the CIO’s control to deliver,” he said. With increasing reliance on IT services and consulting to deliver outcomes ranging from commoditized customer support to differentiating generative AI implementations, the CIO role may soon become less about being that one-stop shop for business support, overseeing project and products developed in-house, and more about weaving together myriad services undertaken by an increasingly heterogeneous mix of talent sources, predominantly beyond the CIO’s direct purview.



Quote for the day:

''Thinking is easy acting is difficult, and to put one's thoughts into action is the most difficult thing in the world.'' -- Johann Wolfgang von Goethe