Daily Tech Digest - April 30, 2024

Transform Data Leadership: Core Skills for Chief Data Officers in 2024

The chief data officer is tasked not only with unlocking the value of data but also with explaining, at all levels of the company, why data is its lifeblood. They must be effective storytellers who can interpret data in such a way that business stakeholders take notice. An effective CDO pairs storytelling with the supporting data and makes it easy to share insights with stakeholders and get their buy-in. For instance, how effective a CDO is in getting departmental buy-in might boil down to the ongoing department "showback reports" they can produce. "Credibility and integrity are two other important traits in addition to effective communication skills, as it is crucial for the CDO to gain the trust of their peers," Subramanian says. ... To garner support for data initiatives and drive organizational buy-in, chief data officers must be able to communicate complex data concepts in a clear and compelling manner to diverse stakeholders, including executives, business leaders, and technical teams. "CDOs have to serve as a bridge between the tech and operational aspects of the organization as they work to drive business value and increase data literacy and awareness," says Schwenk.


Mind-bending maths could stop quantum hackers, but few understand it

The task of cracking much current online security boils down to the mathematical problem of finding the two numbers that, when multiplied together, produce a given third number. You can think of this third number as a key that unlocks the secret information. As this number gets bigger, the amount of time it takes an ordinary computer to find its two factors becomes longer than our lifetimes. Future quantum computers, however, should be able to crack these codes much more quickly. So the race is on to find new encryption algorithms that can stand up to a quantum attack. ... Most lattice-based cryptography is based on a seemingly simple question: if you hide a secret point in such a lattice, how long will it take someone else to find the secret location starting from some other point? This game of hide and seek can underpin many ways to make data more secure. A variant of the lattice problem called “learning with errors” is considered too hard to break even on a quantum computer. As the size of the lattice grows, the amount of time it takes to solve is believed to increase exponentially, even for a quantum computer.
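
To make "learning with errors" concrete, here is a minimal Python sketch that generates a toy LWE instance: a hidden secret vector, public random equations, and a small error folded into each result. The parameters are deliberately tiny and insecure; real schemes use far larger dimensions and moduli.

```python
import numpy as np

# Toy "learning with errors" instance. An attacker sees A and b and must
# recover the secret s, but the small random error e scrambles the equations.
q, n, m = 97, 8, 16                  # tiny, insecure parameters for illustration
rng = np.random.default_rng(0)

s = rng.integers(0, q, n)            # hidden secret vector
A = rng.integers(0, q, (m, n))       # public random matrix
e = rng.integers(-2, 3, m)           # small noise: this is what makes it hard
b = (A @ s + e) % q                  # public noisy inner products

# Without e, s falls out of basic linear algebra. With it, the best known
# attacks (classical or quantum) are believed to scale exponentially in n.
print("public instance:", A.shape, b.shape)
```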


Secure by Design: UK Enforces IoT Device Cybersecurity Rules

The connected-device law kicks in following repeated attacks against devices with known or easily guessable passwords, which have fueled distributed denial-of-service attacks affecting major institutions, including the BBC as well as major U.K. banks such as Lloyds and the Royal Bank of Scotland. Officials said the law is designed not just for consumer protection but also to improve national cybersecurity resilience, including against malware that targets IoT devices, such as Mirai and its spinoffs, all of which can exploit default passwords in devices. Western officials have also warned that Chinese and Russian nation-state hacking groups exploit known vulnerabilities in consumer-grade network devices. U.S. authorities earlier this year disrupted a Chinese botnet used by a group tracked as Volt Typhoon, warning that Beijing threat actors used infected small office and home office routers to cloak their hacking activities. "It's encouraging to see growing emphasis on implementing best practices in securing IoT devices before they leave the factory," said Kevin Curran, a professor of cybersecurity at Ulster University in Northern Ireland.


Will New Government Guidelines Spur Adoption of Privacy-Preserving Tech?

“The risks of AI are real, but they are manageable when thoughtful governance practices are in place as enablers, not obstacles, to responsible innovation,” Dev Stahlkopf, Cisco’s executive VP and chief legal officer, said in the report. One of the big potential benefits of privacy-preserving technology is enabling multiple parties to share their most valuable and sensitive data, but do so in a privacy-preserving manner. “My data alone is good,” Hughes said. “My data plus your data is better, because you have indicators that I might not see, and vice versa. Now our models are smarter as a result.” Carmakers could benefit by using privacy-preserving technology to combine sensor data collected from engines. “I’m Mercedes. You’re Rolls-Royce. Wouldn’t it be great if we combined our engine data to build a model on top of it that could identify and predict maintenance needs and therefore recommend a better maintenance schedule?” Hughes said. Privacy-preserving tech could also improve public health through the creation of precision medicine techniques or new medications.
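
A common pattern behind this kind of multi-party data sharing is federated learning: each party trains on its own data locally, and only model parameters cross organizational boundaries. The Python sketch below is a toy illustration of federated averaging over two parties; the data is invented, and real deployments would layer on secure aggregation or differential privacy.

```python
import numpy as np

def local_fit(X, y):
    # Each party fits a linear model on its own private data (least squares).
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])

# Two parties (say, two manufacturers) with private sensor datasets.
X1 = rng.normal(size=(100, 2)); y1 = X1 @ true_w + rng.normal(0, 0.1, 100)
X2 = rng.normal(size=(100, 2)); y2 = X2 @ true_w + rng.normal(0, 0.1, 100)

# Only the fitted weights are shared, never the raw rows.
w_global = (local_fit(X1, y1) + local_fit(X2, y2)) / 2
print("combined model weights:", w_global)
```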


GQL: A New ISO Standard for Querying Graph Databases

The basis for graph computing is the property graph, which is superior in describing dynamically changing data. Graph databases have been widely used for decades, but only recently has the form generated new interest as a pivotal component in Large Language Model-based Generative AI apps. A graph model can visualize complex, interconnected systems. The downside of LLMs is that they are black boxes of a sort, Rathle explained. “There’s no way to understand the reasoning behind the language model. It is just following a neural network and doing its thing,” he said. A knowledge graph can serve as external memory, a way to visualize how the LLM constructed its worldview. “So I can trace through the graph and see why it arrived at that answer,” Rathle said. Graph databases are also widely used by health care companies for drug discovery and by aircraft and other manufacturers as a way to visualize complex system design, Rathle said. “You have all these cascading dependencies and that calculation works really well in the graph,” Rathle said.
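
For a feel of what querying a property graph looks like, here is a short Python sketch using the neo4j driver. The connection details and the Drug/Condition schema are hypothetical, and the query is written in Cypher, the language GQL's pattern-matching syntax closely follows.

```python
from neo4j import GraphDatabase

# Hypothetical connection details and schema, for illustration only.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

query = """
MATCH (d:Drug)-[:TREATS]->(c:Condition {name: $condition})
RETURN d.name AS drug
"""

with driver.session() as session:
    # Tracing which graph patterns matched is what makes the answer explainable.
    for record in session.run(query, condition="hypertension"):
        print(record["drug"])

driver.close()
```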


Microsoft deputy CISO says gen AI can give organizations the upper hand

One of the major promises is a reduction in the fraud that often occurs around clinical trials. Bad actors have a vested interest in whether the drug will pass FDA inspections – literally. In other words, falsified results and insider trading are big risks. AI-powered security applied to operational technology in the manufacturing plant or lab can monitor the equipment, not only detecting signs of failure but also alerting the company to potential tampering. At the same time, they’re also looking at ways to improve drug and polymer research. “They build better products and shorten the cycle of go-to-market for drugs. That’s worth billions of dollars,” he said. “They have a patent that only lasts 10 years. If they can get to market faster, they can hold on to that market share more before it goes to the public.” But that transformation of the SOC is potentially the most impactful of use cases, especially as cybercriminals adopt generative AI and go to work without the guardrails that encumber organizations. “We’ve seen a dramatic adoption of what I would call open-source AI from attackers to be able to use and build models,” Bissell said.


What IT leaders need to know about the EU AI Act

Lawyers and other observers of the EU AI Act point to a couple of major issues that could trip up CIOs. First, the transparency rules could be difficult to comply with, particularly for organizations that don’t have extensive documentation about their AI tools or don’t have a good handle on their internal data. The requirements to monitor AI development and use will add governance obligations for companies using both high-risk and general-purpose AIs. Second, although parts of the EU AI Act wouldn’t go into effect until two years after the final passage, many of the details affecting regulations have yet to be written. In some cases, regulators don’t have to finalize the rules until six months before the law goes into effect. The transparency and monitoring requirements will be a new experience for some organizations, Domino Data Lab’s Carlsson says. “Most companies today face a learning curve when it comes to capabilities for governing, monitoring, and managing the AI lifecycle,” he says. “Except for the most advanced AI companies or in heavily regulated industries like financial services and pharma, governance often stops with the data.”


Standard Chartered CEO on why cybersecurity has become a 'disproportionately huge topic' at board meetings

Working together with the CISO team and the risk cyber team as well — of course, they're technical experts themselves, and I think they’ve done an excellent job of putting the technical program around our own defense mechanisms. But probably the most challenging thing was to get the broad business — the people who weren't technical experts — to understand what role they have to play in our cybersecurity defenses. ... It was really interesting to go through that exercise early on and see how unfamiliar some of the business heads were with what their own crown jewels were. Once you identify and are clear on what the crown jewels are, then you have to be part of the defense mechanism around them. Each one of these things costs money or reduces flexibility or, if done incorrectly, impacts the customer experience. Really working through that and getting them involved in structuring their business around the cyber risk is an ongoing process; I don't think we'll ever be done with that. We've made huge progress in the past six, seven years, I'd say, as cyber risks have increased.


The 5 components of a DataOps architecture

If the data comes from multiple sources with timestamps, you can blend it into one database. Use the aggregated data to produce summarized data at various levels of granularity, such as daily aggregate reports. Databases can produce satellite tables, link them together via keys and automatically update them. For example, user ID is the key linking personal information to a user transactions table. If the data is in standardized fields, use metadata to indicate each field's type -- text, integer or category -- along with its labels and size. ... DataOps is not just about the acquisition and use of business data. Archive rarely used or no longer relevant data, such as inactive accounts. Some satellite tables may need regular updates, such as lists of blocklisted IP addresses. Back up data regularly, and update lists of users who can access it. Data teams should decide the best schedule for the various updates and tests to guarantee continuous data integrity. The DataOps process may seem complex, but chances are you already have plenty of data and some of the components in place. Adding security layers may even be easy.
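
As a minimal sketch of the satellite-table and aggregation ideas above (table names and columns invented for the example), using Python's built-in sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# A transactions table plus a "satellite" table of personal details,
# linked via the user_id key.
cur.execute("CREATE TABLE transactions (user_id INTEGER, ts TEXT, amount REAL)")
cur.execute("CREATE TABLE users (user_id INTEGER PRIMARY KEY, name TEXT)")
cur.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?)",
    [(1, "2024-04-29", 30.0), (1, "2024-04-30", 12.5), (2, "2024-04-30", 8.0)],
)
cur.execute("INSERT INTO users VALUES (1, 'Ada'), (2, 'Grace')")

# A daily aggregate report: the same data summarized at coarser granularity.
for row in cur.execute(
    "SELECT date(ts) AS day, SUM(amount) FROM transactions GROUP BY day ORDER BY day"
):
    print(row)
```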


Kubernetes: Future or Fad?

If you had asked somebody at the beginning of containerization, it was all about how containers have to be stateless. And look at what we do today: we deploy databases into Kubernetes, and we love it. Cloud-nativeness isn’t just stateless anymore, and I’d argue a good one-third of container workloads may be stateful today (with ephemeral or persistent state), and the share will keep increasing. The beauty of orchestration, automatic resource management, self-healing infrastructure, and everything in between is just too incredible to not use for “everything.” Anyhow, whatever happens to Kubernetes itself (maybe it will become an orchestration extension of the OCI?!), I think it will disappear from the eyes of the users. It (or its successor) will become the platform to build container runtime platforms. But to make that happen, debug features need to be made available. At the moment, you have to look way too deep into Kubernetes or agent logs to find and fix issues. Anyone who has never had to work out why a Let’s Encrypt certificate isn’t renewing may raise a hand now. To bring it to a close, Kubernetes certainly isn’t a fad, but I strongly hope it's not going to be our future either. At least not in its current incarnation.



Quote for the day:

"The best preparation for tomorrow is doing your best today." -- H. Jackson Brown, Jr.

Daily Tech Digest - April 29, 2024

The end of vendor-backed open source?

Considering what’s happened over the last few years, I hope we are witnessing the end of single-vendor-backed open-source projects. Many of these companies see open source as a vehicle to drive the adoption of their software and then switch to a prohibitive license once the software starts to reach massive adoption through a cloud provider. This has happened more times in recent years than I care to count. There are only a few companies that can reliably provide open-source software without hurting their own bottom line. These are the software giants like AWS, Google, and Microsoft. They can comfortably contribute to open-source software and collaborate with companies who specialize in some of the more niche use cases. They may not have been able to cover these niches on their own, but with the help of the community they can. And at the end of the day, even the smaller companies who provide managed offerings need somewhere to host them, right? I see a bright future for Valkey, the open-source fork of Redis. It’s starting off right in the Linux Foundation, where companies contribute freely without fear of a single company dictating the direction of a project.


The Rise of Augmented Analytics: Combining AI with BI for Enhanced Data Insights

Traditionally, data analysis has been the domain of highly skilled data scientists and BI experts. Augmented analytics changes this dynamic by empowering non-technical users to participate actively in the insight-generation process. Augmented analytics automates complex tasks and presents information in a user-friendly format so that employees across departments can easily access, explore, and derive insights from their organization’s data. This will both streamline and simplify business processes – no more worrying about how many business accounts you should have, whether or not your financing option offers optimal interest rates, or if adopting new standards of technologies is worth it in the long run. The relevant teams can easily use data to find the right information without waiting for an expert to analyze it. ... One of the most significant challenges faced by businesses is the shortage of data science expertise. Augmented analytics helps alleviate this bottleneck by reducing the reliance on specialized data professionals.


The 4 Types Of Generative AI Transforming Our World

Generative Adversarial Networks (GANs) emerged in 2014 and quickly became one of the most effective models for generating synthetic content, both text and images. The basic principle involves pitting two different algorithms against each other. One is known as the ‘generator,’ and the other is known as the ‘discriminator,’ and both are given the task of getting better and better at out-foxing each other. ... Neural Radiance Fields (NeRFs) are the newest technology covered here, only emerging onto the scene in 2020. Unlike the other generative technologies, they are specifically used to create representations of 3D objects using deep learning. This means creating an aspect of an image that can't be seen by the ‘camera’ ... One of the latest advancements in the field of generative AI is the development of hybrid models, which combine various techniques to create innovative content generation systems. These models draw on the strengths of different approaches, such as blending the adversarial training of Generative Adversarial Networks (GANs) with the iterative denoising of diffusion models to produce more refined and realistic outputs.
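
As a rough illustration of the generator-versus-discriminator game, here is a toy PyTorch sketch that trains a GAN to mimic a simple one-dimensional distribution. The architecture and hyperparameters are arbitrary demonstration choices, not a production recipe.

```python
import torch
import torch.nn as nn

# Toy GAN: the generator learns to mimic samples from a 1-D Gaussian
# centered at 4.0; the discriminator learns to tell real from generated.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.25 + 4.0   # samples from the target distribution
    fake = G(torch.randn(64, 8))             # generator's attempt

    # Discriminator step: label real samples 1, generated samples 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator call fakes real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# Generated samples should now cluster near the target mean of 4.0.
print(G(torch.randn(1000, 8)).mean().item())
```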


Closing the cybersecurity skills gap with upskilling programs

The power of upskilling is undeniable, but putting it into practice with employees is the real challenge. The top reason organizations struggle to implement successful upskilling programs has not changed in the last three years of this study: lack of time. Despite clear barriers, organizations can unlock upskilling engagement with a culture of continuous learning. The first step is to identify existing skills in order to see the gaps. Leaders need to stay engaged in this process: only 33% of executives completely understand the skills their IT teams need, and 68% of technologists say leadership at their organization is not aware of a tech skills gap. When it comes to discovering what drives employees to upskill, a new #1 motivation to participate emerged this year: stronger job security and improved confidence. This is yet another proof point that upskilling contributes to higher employee engagement, which can drive performance and innovation. With 78% of organizations abandoning projects partway through because they don't have enough employees with the right tech skills, there is no time to waste in closing the gap.


Are Enterprises Overconfident About Cybersecurity Readiness?

Comparing statistics for global companies and their overall state of readiness, between the years 2022-2023 and 2023-2024, shows certain surprising changes. The number of Mature companies dropped to 3% this year from 15% in the previous year. The number of Progressive companies dropped by 4% this year; however, the number of Formative companies increased by 13%. In addition, more Beginners were included in the index this year. Raymond Janse van Rensburg, VP, specialists and solutions engineering, APJC, Cisco, said this should not be considered as a static comparison, as the benchmark was done with “a lot more capabilities” this year. "Digitization continues to accelerate around us. We see that there is pressure on organizations to transform. But with that, the attack surface is changing, and there is a lot of technology advancement that is taking place as well," said van Rensburg. "We need to keep pace with change, with the transformation of the organizations, and also to keep pace with change for cybersecurity and the capabilities we bring in across the industry to provide the necessary protections."


Risk orchestration: Addressing the Misconceptions – Hub Platform vs True Orchestration Performance

A true risk orchestration platform synchronises workflows and data and enables customisable journeys. Waterfalling reduces friction and saves costs, ensuring appropriate checks are only applied as necessary. Genuine customers are fast-tracked, while others are routed for additional due diligence. The technology unites complex systems and processes to create a single view of customer risk, a unified risk score and automated decision making. ... In true risk orchestration, all of these processes are seamlessly synchronised to deliver a single view of customer risk. Informed decision making is made simpler, smoother, more accurate and efficient. The impact of this is felt not only in customer satisfaction, which can be vastly enhanced by the speed at which individuals are authenticated and served, but in the overall effectiveness of compliance processes too, in terms of how much time and resource is spent. To that end, there’s a genuine bottom-line benefit to be realised through effective risk orchestration of your compliance processes. 


If Software Quality Is Everybody’s Responsibility, So Is Failure

Quality failures tend to stem from broader organizational and leadership failures. Scapegoating testers for systemic issues is counterproductive. It obscures the real problems and stands in the way of meaningful solutions to quality failings. ... To build ownership, organizations need to shift testing upstream, integrating it earlier into requirements planning, design reviews, and development processes. It also requires modernizing the testing practice itself, utilizing the full range of innovation available: from test automation, shift-left testing, and service virtualization, to risk-based test case generation, modeling, and generative AI. With a shared understanding of who owns quality, governance policies can better balance competing demands around cost, schedule, capabilities, and release readiness. Testing insights will inform smarter tradeoffs, avoiding quality failures and the finger-pointing that today follows them. This future state reduces the likelihood of failures — but also acknowledges that some failures will still occur despite best efforts. In these cases, organizations must have a governance model to transparently identify root causes across teams, learn from them, and prevent recurrence.


Navigating personal liability: post data-breach recommendations for CISOs

The single most important guidance that you should follow is to involve counsel promptly and frequently. Upon your first knowledge of a claimed data breach, you should promptly determine who your organization’s legal counsel is for such matters and contact those lawyers. ... It can be tempting for CSOs and CISOs to take the reins in data breach incidents, given their technical expertise or sense of personal responsibilities. However, this can lead to unintended legal complications. In the aftermath of a data breach, it’s critical to let your organization’s legal counsel guide decision-making processes. They can ensure that the response to the data breach complies with applicable laws and that both communication and remediation efforts are handled appropriately to minimize potential liability. ... Although it’s rare to face personal liability or criminal charges, there can be situations where it could be a real or feared risk. Independent legal advice can provide guidance tailored to your specific situation, to identify where your interests may be different from those of your organization, to allay your concerns, all of which can be protected under attorney-client privilege.


Change Healthcare Admits to Making Ransom Payment to Protect Patient Data, Discloses That Hacker Broke in Days Before Attack

Even though a ransom payment was made, the attack was nevertheless devastating to the processing of health insurance claims and payment information across the country. The UnitedHealth Group update indicates that 99% of pharmacies are now back to normal ability to process claims, and that its internal payment processing capacity is back to 86% of normal. 80% of the function of its major platforms and products has also been restored and the group expects a full recovery in a matter of weeks. It also says that medical claims are back to near-normal processing levels, but that it is working directly with some smaller providers that remain hampered by the attack fallout and is seeking to set up alternate methods of submission for them. The ransom payment had been widely reported prior to the company’s admission, due to an otherwise inexplicable Bitcoin transaction (equaling about $22 million) traced to a wallet known to be associated with AlphV. The RaaS provider also opted to pull what appeared to be an “exit scam” after this payment rolled in, leading to the bag-holding affiliate taking to dark web forums to accuse the group of theft.


Top 10 barriers to strategic IT success

The State of the CIO Study found that staff and skills shortages were the No. 1 challenge cited by CIOs when asked what most often forces them to redirect resources away from strategic tasks, with 59% saying as much. “Talent — or the lack of it — is a huge, huge issue, and when we look at the demographics, we don’t see that changing in the future,” says Chirajeet (CJ) Sengupta, managing partner at Everest Group, a global research firm. Sengupta acknowledges that layoffs at the tech giants in the past year or so eased the talent crunch a bit — but only for other big companies that could offer highly competitive salaries to those laid-off workers. And while CIOs are working to train internal candidates within IT and the business units for tough-to-fill tech jobs or are using contractors to fill in staffing gaps, Sengupta says those practices create talent challenges, too. Upskilling takes time, and contracted workers aren’t usually as close to the business as employees. Now there’s news that the competition for tech workers may heat up even more this year.



Quote for the day:

"'To do great things is difficult; but to command great things is more difficult.'' -- Friedrich Nietzsche

Daily Tech Digest - April 28, 2024

How RPA vendors aim to remain relevant in a world of AI agents

Craig Le Clair, principal analyst at Forrester, sees RPA platforms as being ripe for expansion to support autonomous agents and generative AI as their use cases grow. In fact, he anticipates RPA platforms morphing into all-around toolsets for automation — toolsets that help deploy RPA in addition to related generative AI technologies. “RPA platforms have the architecture to manage thousands of task automations and this bodes well for central management of AI agents,” he said. “Thousands of companies are well established with RPA platforms and will be open to using them for generative AI-infused agents. RPA has grown in part thanks to its ability to integrate easily with existing work patterns, through UI integration, and this will remain valuable for more intelligent agents going forward.” UiPath is already beginning to take steps in this direction with a new capability, Context Grounding, that entered preview earlier in the month. As Enslin explained it to me, Context Grounding is designed to improve the accuracy of generative AI models — both first- and third-party — by converting business data those models might draw on into an “optimized” format that’s easier to index and search.


Cyberattack Gold: SBOMs Offer an Easy Census of Vulnerable Software

Producing a detailed list of software components in an application can have offensive implications, Pesce argues. In his presentation, he will show that SBOMs have enough information to allow attackers to search for specific CVEs in a database of SBOMs and find an application that is likely vulnerable. Even better for attackers, SBOMs will also list other components and utilities on the device that the attacker could use for "living off the land" post-compromise, he says. "Once I've compromised a device ... an SBOM can tell me what the device manufacturer left behind on that device that I could potentially use as tools to start probing other networks," he says. The minimum baseline for SBOM data fields includes the supplier, the component name and version, dependency relationships, and a timestamp of when the information was last updated, according to the US Department of Commerce guidelines. In fact, a comprehensive database of SBOMs could be used in a manner similar to the Shodan census of the Internet: Defenders could use it to see their exposure, but attackers could use it to determine what applications might be exposed to a particular vulnerability, Pesce says.
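
To see why defenders take this seriously, the sketch below shows how little code an attacker (or a defender auditing exposure) needs to sweep a CycloneDX-style JSON SBOM for components on a known-vulnerable list. The file name and the vulnerable-version list are hypothetical.

```python
import json

# Illustrative list of (component, version) pairs with known CVEs.
VULNERABLE = {("log4j-core", "2.14.1"), ("openssl", "1.1.1k")}

with open("sbom.json") as f:          # hypothetical CycloneDX SBOM
    sbom = json.load(f)

for comp in sbom.get("components", []):
    key = (comp.get("name"), comp.get("version"))
    if key in VULNERABLE:
        supplier = comp.get("supplier", {}).get("name", "unknown")
        print(f"Potentially vulnerable: {key[0]} {key[1]} (supplier: {supplier})")
```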


Computer scientists unveil novel attacks on cybersecurity

In this new study, researchers leverage modern predictors' utilization of a Path History Register (PHR) to index prediction tables. The PHR records the addresses and precise order of the last 194 taken branches in recent Intel architectures. With innovative techniques for capturing the PHR, the researchers demonstrate the ability to not only capture the most recent outcomes but also every branch outcome in sequential order. Remarkably, they uncover the global ordering of all branches. Despite the PHR typically retaining the most recent 194 branches, the researchers present an advanced technique to recover a significantly longer history. "We successfully captured sequences of tens of thousands of branches in precise order, utilizing this method to leak secret images during processing by the widely used image library, libjpeg," said Hosein Yavarzadeh, a UC San Diego Computer Science and Engineering Department Ph.D. student and lead author of the paper. The researchers also introduce an exceptionally precise Spectre-style poisoning attack, enabling attackers to induce intricate patterns of branch mispredictions within victim code. 


Bank CIO: We don't need AI whizzes, we need critical thinkers to challenge AI

"We realize having fantastic brains and results isn't necessarily as good as someone that is willing to have critical thinking and give their own perspectives on what AI and generative AI gives you back in terms of recommendations," he says. "We want people that have the emotional and self-awareness to go, 'hmm, this doesn't feel quite right, I'm brave enough to have a conversation with someone, to make sure there's a human in the loop.'" ... In working closely with AI, Mai discovered an interesting twist in human nature: People tend to disregard their own judgement and diligence as they grow dependent on these systems. "As an example, we found that some humans become lazy -- they prompt something, and then decide, 'ah that sounds like a really good response,' and send it on." When Mai senses that level of over-reliance on AI, "I'll march them into my office, saying 'I'm paying you for your perspective, not a prompt and a response in AI that you're going to get me to read. Just taking the results and giving it back to me is not what I'm looking for, I'm expecting your critical thought."


Scientists Uncover Surprising Reversal in Quantum Systems

Things get particularly interesting if, in addition, the particles in such a system interact, meaning that they attract or repel each other, like electrons in solids. Studying topology and interactions together in solids, however, is extremely difficult. A team of researchers at ETH led by Tilman Esslinger has now managed to detect topological effects in an artificial solid, in which the interactions can be switched on or off using magnetic fields. Their results, which have just been published in the scientific journal Science, could be used in quantum technologies in the future. ... Surprisingly, the atoms didn’t simply stop at the wall, but suddenly turned around. The screw was thus moving backward, although it kept being turned clockwise. Esslinger and his team explain this return by the two doughnut topologies that exist in the lattice – one with a clockwise-turning doughnut and another one that turns in the opposite direction. At the wall, the atoms can change from one topology to the other, thus inverting their direction of motion. Now the researchers switched on a repulsive interaction between the atoms and watched what happened. 


Talos IR trends: BEC attacks surge, while weaknesses in MFA persist

Within BEC attacks, adversaries will send phishing emails appearing to be from a known or reputable source making a valid request, such as updating payroll direct deposit information. BEC attacks can have many motivations, often financially driven, aimed at tricking organizations into transferring funds or sensitive information to malicious actors. BEC offers adversaries the advantage of impersonating trusted contacts to facilitate internal spearphishing attacks that can bypass traditional external defenses and increase the likelihood of deception, widespread malware infections and data theft. In one engagement, adversaries performed a password-spraying attack and MFA exhaustion attacks against several employee accounts. There was a lack of proper MFA implementation across all the impacted accounts, leading to the adversaries gaining access to at least two accounts using single-factor authentication. The organization detected and disrupted the attack before adversaries could further their access or perform additional post-compromise activities.
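
On the defensive side, password spraying has a recognizable shape in authentication logs: failures spread across many distinct accounts, often from a single source. A minimal Python sketch of that heuristic (the event format is invented for illustration) might look like this:

```python
from collections import defaultdict

def detect_spray(events, account_threshold=10):
    """Flag source IPs whose failed logins span many distinct accounts.

    events: iterable of (source_ip, username, success) tuples, a stand-in
    for whatever fields your real auth logs contain.
    """
    failures = defaultdict(set)
    for ip, user, success in events:
        if not success:
            failures[ip].add(user)
    return {ip: users for ip, users in failures.items()
            if len(users) >= account_threshold}

# Example: one IP failing against 12 different accounts trips the threshold.
sample = [("203.0.113.5", f"user{i}", False) for i in range(12)]
print(detect_spray(sample))
```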


Creating a Culture of Belonging: Addressing Workplace Bias in Tech

Paul Wallenberg, senior manager of technology services at technology staffing, recruiting, and culture firm LaSalle Network, believes the formation of employee resource groups (ERGs) can help HR departments tackle unconscious bias and develop educational structures for tackling those issues. "The challenge with navigating and correcting microaggressions is people that are doing them often don't know they're doing it," he said. "There's a subtlety and an indirectness to them that people may not understand." ERGs can help dismantle the constructs those people have been working within for their entire careers — but Wallenberg notes there is no "one size fits all" solution. Hannah Johnson, senior vice president for tech talent programs at CompTIA, agrees, noting that the important factor is that any DEI efforts must be nuanced to target what's important to that organization. She urges leadership to prioritize transparency by openly communicating with their organization about issues, plans, and anticipated outcomes.


Building AI With MongoDB: Integrating Vector Search And Cohere to Build Frontier Enterprise Apps

It is in the realm of embedding where Cohere has made a host of recent advances. Described as “AI for language understanding,” Embed is Cohere’s leading text representation language model. Cohere offers both English and multilingual embedding models, and gives users the ability to specify the type of data they are computing an embedding for (e.g., search document, search query). The result is embeddings that improve the accuracy of search results for traditional enterprise search or retrieval-augmented generation. One challenge developers faced using Embed was that documents had to be passed one by one to the model endpoint, limiting throughput when dealing with larger data sets. To address that challenge and improve developer experience, Cohere has recently announced its new Embed Jobs endpoint. Now entire data sets can be passed in one operation to the model, and embedded outputs can be more easily ingested back into your storage systems. Additionally, with only a few lines of code, Rerank 3 can be added at the final stage of search systems to improve accuracy. 
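
To give a sense of scale for "only a few lines of code," a rerank call with the Cohere Python SDK looks roughly like the sketch below. The API key, documents, and query are placeholders, and the exact model identifier may change over time.

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

documents = [
    "Carbon capture can offset industrial emissions.",
    "The quarterly report showed a rise in subscriptions.",
    "Emissions trading schemes put a price on carbon.",
]

# Rerank the candidate documents against the query and keep the top two.
results = co.rerank(
    model="rerank-english-v3.0",   # model name current as of this writing
    query="How can companies reduce their carbon footprint?",
    documents=documents,
    top_n=2,
)

for hit in results.results:
    print(hit.index, hit.relevance_score)
```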


What is a technology audit and why it's important

According to McKinsey, companies pay a whopping 40 percent "tech debt tax" due to delayed digital transformation decisions. It's acceptable, of course, in some situations, when such a decision is a conscious choice dictated by business needs, circumstances or regulations. Unfortunately, in many cases business owners and startup founders don't even realize how much they are paying to service their technical debt. To reveal such hidden technical debt, a detailed review of the technology aspects of your business is required. When you have been working on a software product for a while, you may start asking yourself various questions: Are we following all the necessary best practices? Is our architecture scalable and secure? Are we missing security vulnerabilities? Do we really have all the software assets developed by third-party vendors? Additionally, technical debt can have its internal "guardians" who may want to maintain the status quo for various reasons. To mitigate the risks described above and ensure efficient operations, proper security and optimal usage of cloud services, an external technology audit is one of the best options.


Steer Between Debt and Delay With Platform Engineering

While EA pros often hear cries of “ivory tower,” “out of touch,” and so forth, such criticisms overlook the fundamental point: Enterprises put architecture in place to deliver business value, such as the value of controlling the risks that emerge from technical debt and sprawl—security breaches, outages, unsustainable cost structures, and so forth. And so the EA team started to try to control the morass that distributed computing had gotten itself into through standardizing product choices (Oracle and SQL Server are OK, but let’s sunset Informix) and reviewing design choices submitted by project (later product) teams. The EA reviews started to encompass a broader and wider array of questions. Lengthy intake forms and checklists became typical, covering everything from application design patterns to persistence approaches to monitoring to backup and more. The notorious "Architecture Review Board" became a requirement for all projects. Review cycles became more protracted and painful, and more than a few business organizations got fed up and just went shadow until their shadow systems failed, at which point central IT would be pressured to take over the mess. 



Quote for the day:

"Things turn out best for the people who make the best of the way things turn out." -- John Wooden

Daily Tech Digest - April 27, 2024

AI twins and the digital revolution

The digital twin is designed to work across a single base station with a few mobile devices all the way up to hundreds of base stations with thousands of devices. “I would say the RF propagation piece is perhaps one of the most exciting areas apart from the data collection,” Vasishta noted. “The ability to simulate real antennas at scale, including interference and other elements, is where we’ve really spent the most time, to make sure that it is an accurate implementation.” The platform also includes a software-defined, full RAN stack to allow researchers and members to customise, programme and test 6G network components in real time. Vendors, such as Nokia, can bring their own RAN stack to the platform, but Nvidia’s open RAN compliant stack is provided. Vasishta added that users of the research platform can collect data from their digital twin within their channel model, which allows them to train for optimisation. “It now allows you to use AI and machine learning in conjunction with a digital twin to fully simulate an environment and create site-specific channel models so you can always have the best connectivity or lowest power consumption, for instance,” he said.


The temptation of AI as a service

AWS has introduced a new feature aimed at becoming the prime hub for companies’ custom generative AI models. The new offering, Custom Model Import, launched on the Amazon Bedrock platform (enterprise-focused suite of AWS) and provides enterprises with infrastructure to host and fine-tune their in-house AI intellectual property as fully managed sets of APIs. This move aligns with increasing enterprise demand for tailored AI solutions. It also offers tools to expand model knowledge, fine-tune performance, and mitigate bias. All of these are needed to drive AI for value without increasing the risk of using AI. In the case of AWS, the Custom Model Import allows model integrations into Amazon Bedrock, where they join other models, such as Meta’s Llama 3 or Anthropic’s Claude 3. This provides AI users the advantage of managing their models centrally alongside established workflows already in place on Bedrock. Moreover, AWS has announced enhancements to the Titan suite of AI models. The Titan Image Generator, which translates text descriptions into images, is shifting to general availability.
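
For illustration, invoking a Bedrock-hosted model from Python via boto3 looks roughly like the sketch below. The model ARN is a placeholder for whatever identifier Bedrock assigns to an imported custom model, and the request body schema varies from model to model.

```python
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder ARN: a custom imported model is referenced by the ARN that
# Bedrock assigns when the import job completes.
response = client.invoke_model(
    modelId="arn:aws:bedrock:us-east-1:123456789012:imported-model/EXAMPLE",
    body=json.dumps({"prompt": "Summarize our Q1 results.", "max_tokens": 200}),
)

print(json.loads(response["body"].read()))
```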


Overwhelmed? 6 ways to stop small stresses at work from becoming big problems

"Someone once used the analogy that you have crystal balls and bouncy balls. If you drop your crystal ball, it shatters, and you'll never be able to get it back. Whereas if you drop your bouncy ball, it will bounce back." "I think you need to work out the crystal balls to prioritize because if you drop that ball, it's gone. For me, it always helps to take stuff off the priority list. And I think that approach helps with work/life balance. Sometimes, it's important to choose." ... "If we have a small problem in one store, and we pick up that's prevalent in all stores, collectively the impact is significant. So, that's why I get to the root cause as quickly as possible."  "And then you understand what's going on rather than just trying to stick a plaster over what appears to be a cut, but is something quite a bit deeper underneath." ... "If you look at something in darkness, it can feel pretty overwhelming quickly. So, giving a problem focus and attention, and getting some people around it, tends to put the issue in perspective."  ... "It's nice to have someone who can point out to you, 'You're ignoring that itch, why don't you do something about it?' I've found it's good to speak with an expert with a different perspective."


15 Characteristics of (a Healthy) Organizational Culture

Shared common values refer to the fundamental beliefs and principles that an organization adopts as its foundation. These values act as a compass, guiding behaviors, decision-making, and interactions both within the organization and with external stakeholders. They help create a cohesive culture by aligning employees’ actions with the company’s core mission and vision. ... A clear purpose and direction align the organization’s efforts and goals. This clarity helps unite the team, focusing their efforts on achieving specific objectives and guiding strategic planning and daily operations. ... Transparent and regular communication supports openly sharing information and feedback throughout the organization. This practice fosters trust, helps in early identification of issues, encourages collaboration, and ensures that everyone is informed and aligned with the organization’s goals. ... Collaboration and teamwork underpin a cooperative environment where groups work together to achieve collective objectives. This approach enhances problem-solving, innovation, and efficiency, while also building a supportive work environment.


Palo Alto Networks’ CTO on why machine learning is revolutionizing SOC performance

When it comes to data center security, you have to do both. You have to keep them out. And that’s the role of traditional cybersecurity. So network security, including, of course, the security between the data center and Ethernet, internal security for segmentation. It includes endpoint security for making sure that vulnerabilities aren’t being exploited and malware isn’t running. It includes identity and access management. Or even privileged access management (PAM), which we don’t do. We don’t do identity access or PAM. It includes many different things. This is about keeping them out or not letting them walk inside laterally. And then the second part of it which, which goes to your question, is now let’s assume they are inside and all defenses have failed. It’s the role of the SOC to look for them. We call it hunting, the hunting function in the SOC. How do we do that? You need machine learning, not [large language models] LLMs, or GPT, but real, traditional machine learning, to do both, both to keep them out and also both to find them if they’re already inside. So we can talk about both and how we use machine learning here and how we use machine learning there.


From hyperscale to hybrid: unlocking the potential of the cloud

To optimize their cloud adoption strategies, and ensure they architect the best fits for their needs, organizations will first need to undertake detailed assessments of their workloads to determine which cloud combinations to go for. Weighing up which cloud options are most appropriate for which workloads isn’t always an easy task. Ideally, organizations should utilize a cloud adoption framework to help scope out the overall security and business drivers that will influence their cloud strategy decision-making. Enabling organizations to identify and mitigate risks and ensure compliance as they move ahead, these frameworks make it easier to proceed confidently with their cloud adoption plans. Since every infrastructure strategy will have unique requirements that include tailored security measures, leveraging the expertise of cloud security professionals will also prove invaluable for ensuring appropriate security measures are in place. Similarly, organizations will be able to gain a deeper understanding of how best to orchestrate their on-premises, private, and public clouds in a unified and cost/performance-optimized manner.


No Fear, AI is Here: How to Harness AI for Social Good

We must proactively think about how our organizations can responsibly leverage AI for good. Our role is to offer our teams the support and guidance required to harness AI’s full power in ways big and small to inspire positive change, ensuring fear doesn’t override optimism. While AI has an undeniable advantage when it comes to its ability to outperform, it cannot replace the power of human creativity, perspectives, and deep insight. ... We only have a finite amount of time to address climate change and related issues such as poverty and inequity. To get there, we’re going to have to try. And then try again. And again. Though it will be an uphill climb, AI can help us climb faster -- and explore as many options as we possibly can, as quickly as we can -- if we use it responsibly. The key is for tech impact leaders to bring forward a human-centric perspective to their company’s investments and use of AI technology, ensuring that their strategies don’t lead to unintended consequences for employment. Don’t let fear prevent you from getting all the help you can from the most powerful technology available. Your team, and the world, need you to be fearless.


Data for AI — 3 Vital Tactics to Get Your Organization Through the AI PoC Chasm

We now have the opportunity to automate manual heavy lifting in data prep. AI models can be trained to detect and strip out sensitive data, identify anomalies, infer records of source, determine schemas, eliminate duplication, and crawl over data to detect bias. There is an explosion of new services and tools available to take the grunt work out of data prep and keep the data bar high. By automating these labor-intensive tasks, AI empowers organizations to accelerate data preparation, minimize errors, and free up valuable human resources to focus on higher-value activities, ultimately enhancing the efficiency and effectiveness of AI initiatives. ... AI is being experimented with and adopted broadly across organizations. With so much activity and interest, it is difficult to centralize work, and often centralization creates bottlenecks that slow down innovation. Encouraging decentralization and autonomy in delivering AI use cases is beneficial as it increases capacity for innovation across many teams, and embeds work into the business with a focus on specific business priorities. 
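
A toy example of the grunt work being automated: the Python sketch below redacts likely email addresses and drops duplicate rows with pandas. Real pipelines would use trained models and much broader sensitive-data coverage; the data here is invented.

```python
import re
import pandas as pd

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # crude email matcher

df = pd.DataFrame({
    "note": ["contact jane@example.com", "contact jane@example.com", "no pii here"],
    "value": [10, 10, 7],
})

df["note"] = df["note"].str.replace(EMAIL, "[REDACTED]", regex=True)  # strip PII
df = df.drop_duplicates()                                             # de-duplicate
print(df)
```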


Blockchain Distributed Ledger Market 2024-2032

Organizations are leveraging blockchain's decentralized and immutable ledger capabilities to enhance transparency, security, and efficiency in their operations. Secondly, the growing demand for secure and transparent transactions, coupled with the rising concerns over data privacy and cybersecurity, is driving the adoption of blockchain distributed ledger solutions. Businesses and consumers alike are increasingly turning to blockchain technology to safeguard their data and assets against cyber threats and fraudulent activities. Moreover, the proliferation of digitalization and the internet of things (IoT) is further driving market growth by creating a demand for reliable and tamper-proof data storage and transmission systems. Blockchain's ability to provide a decentralized and verifiable record of transactions makes it well-suited for IoT applications, such as supply chain tracking, smart contracts, and secure data sharing. Additionally, the emergence of regulatory frameworks and standards for blockchain technology adoption is providing a favorable environment for market expansion, as it instills confidence among businesses and investors regarding compliance and legal aspects.


The real cost of cloud banking

Although compute and memory costs have come down and capacities have gone up over the years, the fact is an inefficient piece of software will still cost you more today than a well-designed, optimised one. Before, it was a simple case of running your software and checking how much memory you were using. With cloud, the pricing may be transparent, but the options available and the cost calculation are much more complex. Such is the complexity involved with cloud costs that many banks bring in specialists to help them optimise their spend. In fact, it’s not only banks, but banking software providers too. Cloud cost optimisation is a fine art that requires time and expertise to fully understand. It would be easy to blame developers, but I’ve never seen business requirements that state that applications should minimise their memory use or use the least expensive type of storage. I’ve been in the position of “the business” needing to make decisions on requirements for storage options, and these decisions aren’t easy, even for someone with a technical background. In defence of cloud providers, their pricing is transparent.



Quote for the day:

“The road to success and the road to failure are almost exactly the same.” -- Colin R. Davis

Daily Tech Digest - April 26, 2024

Counting the Cost: The Price of Security Neglect

In the perfect scenario, the benefits of a new security solution will reduce the risk of a cyberattack. But it’s important to invest with the right security vendor. Any time a vendor has access to a company’s systems and data, that company must assess whether the vendor’s security measures are sufficient. The recent Okta breach highlights the significant repercussions of a security vendor breach on its customers. Okta serves as an identity provider for many organizations, enabling single sign-on (SSO). An attacker gaining access to Okta’s environment could potentially compromise user accounts of Okta customers. Without additional access protection layers, customers may become vulnerable to hackers aiming to steal data, deploy malware, or carry out other malicious activities. When evaluating the privacy risks of security investments, it’s important to consider an organization’s security track record and certification history. ... Security and privacy leaders can bolster their case for additional investments by highlighting costly data breaches, and can tilt the scale in their favor by seeking solutions with strong records in security, privacy, and compliance.


Is Your Test Suite Brittle? Maybe It’s Too DRY

DRY in test code often presents a similar dilemma. While excessive duplication can make tests lengthy and difficult to maintain, misapplying DRY can lead to brittle test suites. Does this suggest that the test code warrants more duplication than the application code? A common solution to brittle tests is to use the DAMP acronym to describe how tests should be written. DAMP stands for "Descriptive and Meaningful Phrases" or "Don’t Abstract Methods Prematurely." Another acronym (we love a good acronym!) is WET: "Write Everything Twice," "Write Every Time," "We Enjoy Typing," or "Waste Everyone’s Time." The literal definition of DAMP has good intention - descriptive, meaningful phrases and knowing the right time to extract methods are essential when writing software. However, in a more general sense, DAMP and WET are opposites of DRY. The idea can be summarized as follows: Prefer more duplication in tests than you would in application code. However, the same concerns of readability and maintainability exist in application code as in test code. Duplication of concepts causes the same problems of maintainability in test code as in application code.
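
A small Python example of the tradeoff (the Order/Customer domain is invented): the DAMP tests repeat a little setup, but each one reads as a self-contained description of a single behavior, which is what you want when a test fails months later.

```python
from dataclasses import dataclass

@dataclass
class Customer:
    tier: str

@dataclass
class Order:
    customer: Customer
    subtotal: float

    @property
    def discount(self) -> float:
        return self.subtotal * 0.2 if self.customer.tier == "gold" else 0.0

# DAMP: duplicated setup, but each test names and shows one behavior.
def test_gold_members_get_twenty_percent_off():
    order = Order(customer=Customer(tier="gold"), subtotal=100)
    assert order.discount == 20

def test_new_customers_get_no_discount():
    order = Order(customer=Customer(tier="new"), subtotal=100)
    assert order.discount == 0
```

An over-DRY alternative would funnel both cases through one parametrized helper, saving a few lines at the cost of hiding what each failure actually means.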


PCI Launches Payment Card Cybersecurity Effort in the Middle East

The PCI SSC plans to work closely with any organization that handles payments within the Middle East payment ecosystem, with a focus on security, says Nitin Bhatnagar, PCI Security Standards Council regional director for India and South Asia, who will now also oversee efforts in the Middle East. "Cyberattacks and data breaches on payment infrastructure are a global problem," he says. "Threats such as malware, ransomware, and phishing attempts continue to increase the risk of security breaches. Overall, there is a need for a mindset change." The push comes as the payment industry itself faces significant changes, with alternatives to traditional payment cards taking off, and as financial fraud has grown in the Middle East. ... The Middle East is one region where the changes are most pronounced. Middle East consumers prefer digital wallets to cards, 60% to 27%, as their most preferred method of payment, while consumers in the Asia-Pacific region slightly prefer cards, 43% to 38%, according to an August 2021 report by consultancy McKinsey & Company.


4 ways connected cars are revolutionising transportation

Connected vehicles epitomize the convergence of mobility and data-driven technology, heralding a new era of transportation innovation. As cars evolve into sophisticated digital platforms, the significance of data management and storage intensifies. The storage industry must remain agile, delivering solutions that cater to the evolving needs of the automotive sector. By embracing connectivity and harnessing data effectively, stakeholders can unlock new levels of safety, efficiency, and innovation in modern transportation. ... Looking ahead, connected cars are poised to transform transportation even further. As vehicles become more autonomous and interconnected, the possibilities for innovation are limitless. Autonomous driving technologies will redefine personal mobility, enabling efficient and safe transportation solutions. Data-driven services will revolutionise vehicle ownership, offering personalised experiences tailored to individual preferences. Furthermore, the integration of connected vehicles with smart cities will pave the way for sustainable and efficient urban transportation networks.


Looking outside: How to protect against non-Windows network vulnerabilities

Security administrators running Microsoft systems spend a lot of time patching Windows components, but it’s also critical to ensure that you place your software review resources appropriately – there’s more out there to worry about than the latest Patch Tuesday. ... Review the security and patching status of your edge, VPN, remote access, and endpoint security software. Each of these categories of software has been used as an entryway into many government and corporate networks. Be prepared to immediately patch or disable any of these software tools at a moment’s notice should the need arise. Ensure that you have a team dedicated to identifying and tracking resources to help alert you to potential vulnerabilities and attacks. Resources such as CISA can keep you alerted, as can signing up for various security and vendor alerts and having staff who are aware of the various security discussions online. These edge devices and software should always be kept up to date, and you should review life cycle windows as well as newer technology and releases that may decrease the number of emergency patching sessions your edge team finds itself in.


Application Delivery Controllers: A Key to App Modernization

As the infrastructure running our applications has grown more complex, the supporting systems have evolved to be more sophisticated. Load balancers, for example, have been largely superseded by application delivery controllers (ADCs). These devices are usually placed in a data center between the firewall and one or more application servers, an area known as the demilitarized zone (DMZ). While first-generation ADCs primarily handled application acceleration and load balancing between servers, modern enterprise ADCs have considerably expanded capabilities and have evolved into feature-rich platforms. Modern ADCs include such capabilities as traffic shaping, SSL/TLS offloading, web application firewalls (WAFs), DNS, reverse proxies, security analytics, observability and more. They have also evolved from pure hardware form factors to a mixture of hardware and software options. One leader of this evolution is NetScaler, which started more than 20 years ago as a load balancer. In the late 1990s and early 2000s, it handled the majority of internet traffic. 
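
At its core, the load-balancing function that ADCs grew out of is simple to sketch. Here is a minimal round-robin backend selector in Python, with invented addresses, leaving aside everything a modern ADC layers on top (health checks, TLS offload, WAF rules, and so on).

```python
import itertools

# Hypothetical application servers behind the load balancer.
backends = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]
rotation = itertools.cycle(backends)

def pick_backend():
    # Classic round-robin: hand out backends in a fixed repeating order.
    return next(rotation)

for _ in range(5):
    print(pick_backend())
```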


Curbing shadow AI with calculated generative AI deployment

IT leaders countered by locking down shadow IT or making uneasy peace with employees consuming their preferred applications and compute resources. Sometimes they did both. Meanwhile, another unseemly trend unfolded, first slowly, then all at once. Cloud consumption became unwieldy and costly, with IT shooting itself in the foot with misconfigurations and overprovisioning among other implementation errors. As they often do when investment is measured versus business value, IT leaders began looking for ways to reduce or optimize cloud spending. Rebalancing IT workloads became a popular course correction as organizations realized applications may run better on premises or in other clouds. With cloud vendors backtracking on data egress fees, more IT leaders have begun reevaluating their positions. Make no mistake: The public cloud remains a fine environment for testing and deploying applications quickly and scaling them rapidly to meet demand. But it also makes organizations susceptible to unauthorized workloads. The growing democratization of AI capabilities is an IT leader’s governance nightmare. 


CIOs eager to scale AI despite difficulty demonstrating ROI, survey finds

“Today’s CIOs are working in a tornado of innovation. After years of IT expanding into non-traditional responsibilities, we’re now seeing how AI is forcing CIOs back to their core mandate,” Ken Wong, president of Lenovo’s solutions and services group, said in a statement. There is a sense of urgency to leverage AI effectively, but adoption speed and security challenges are hindering efforts. Despite the enthusiasm for AI’s transformative potential, which 80% of CIOs surveyed believe will significantly impact their businesses, the path to integration is not without its challenges. Notably, many organizations are not prepared to integrate AI swiftly, which limits IT’s ability to scale these solutions. ... IT leaders also face the ongoing challenge of demonstrating and calculating the return on investment (ROI) of technology initiatives. The Lenovo survey found that 61% of CIOs find it extremely challenging to prove the ROI of their tech investments, and 42% do not expect positive ROI from AI projects within the next year. One of the main difficulties is calculating ROI in a way that convinces CFOs to approve budgets, a challenge that also applies to AI adoption, according to Abhishek Gupta, CIO of Dish TV.
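
For reference, the ROI calculation itself is simple; the hard part, as the survey suggests, is estimating the gain side with any confidence. A minimal sketch with purely hypothetical numbers:

```python
# Hypothetical numbers: the point is the formula CFOs will ask for.
def roi(gain: float, cost: float) -> float:
    """Return on investment as a fraction: (gain - cost) / cost."""
    return (gain - cost) / cost

# e.g. an AI pilot costing $500k that is estimated to return $650k in year one:
print(f"{roi(650_000, 500_000):.0%}")  # -> 30%
```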


AI Bias and the Dangers It Poses for the World of Cybersecurity

Without careful monitoring, these biases could delay threat detection, resulting in data leakage. For this reason, companies combine AI's power with human intelligence to reduce the bias AI exhibits. The empathy and moral compass of human thinking often prevent AI systems from making decisions that could otherwise leave a business vulnerable. ... The opposite could also occur: AI could label a non-threat as malicious activity, leading to a series of false positives that may go unnoticed within the company. ... While some might argue that this is a good thing because supposedly "the algorithm works," it could also lead to alert fatigue. AI threat detection systems were adopted to ease analysts' workloads by reducing the number of alerts. Constant red flags, however, create more work for human security teams, giving them more tickets to resolve than they originally had. This can lead to employee fatigue and human error, and divert attention from actual threats that could impact security.
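
One common way to pair AI detection with human judgment, as described above, is tiered triage: act automatically only on high-confidence detections, queue the ambiguous middle band for an analyst, and keep low-confidence noise out of the ticket queue. A toy sketch, with thresholds that are purely illustrative:

```python
# Sketch of the human-in-the-loop pattern described above: auto-act only on
# high-confidence detections and queue the ambiguous middle band for analyst
# review, rather than flooding the queue with every alert. Thresholds are
# illustrative, not recommendations.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    score: float  # model confidence that the event is malicious, 0..1

def triage(alert: Alert, block_at: float = 0.95, review_at: float = 0.60) -> str:
    if alert.score >= block_at:
        return "auto-block"      # high confidence: act immediately
    if alert.score >= review_at:
        return "human-review"    # ambiguous: a person decides
    return "log-only"            # low confidence: keep noise out of the queue

print(triage(Alert("edr", 0.72)))  # -> human-review
```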


The Peril of Badly Secured Network Edge Devices

The biggest risk involved internet-exposed Cisco Adaptive Security Appliance (ASA) devices: their users were five times more likely than non-ASA users to file a claim. Users of internet-exposed Fortinet devices were twice as likely to file a claim. Another risk comes in the form of Remote Desktop Protocol (RDP): organizations with internet-exposed RDP filed 2.5 times as many claims as organizations without it, Coalition said. Mass scanning by attackers, including initial access brokers, to detect and exploit poorly protected RDP connections remains rampant. The sheer quantity of new vulnerabilities coming to light underscores the ongoing risks network edge devices pose. ... Likewise for Cisco hardware: "Several critical vulnerabilities impacting Cisco ASA devices were discovered in 2023, likely contributing to the increased relative frequency," Coalition said. In many cases, organizations fail to patch these vulnerabilities, leaving them at increased risk, including from attackers targeting the Cisco AnyConnect vulnerability, designated CVE-2020-3259, which the vendor first disclosed in May 2020.
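
Since attackers find exposed RDP by mass scanning, defenders can run the same check against their own estate. A minimal sketch, with placeholder hosts, that tests whether TCP port 3389 is reachable; only scan infrastructure you are authorized to test:

```python
# Quick self-check sketch: is RDP (TCP 3389) reachable on a host you own?
# Hosts listed here are placeholders; only scan infrastructure you are
# authorized to test.
import socket

def rdp_exposed(host: str, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, 3389), timeout=timeout):
            return True
    except OSError:
        return False

for host in ["vpn.example.com", "203.0.113.10"]:  # placeholder hosts
    print(host, "exposes RDP" if rdp_exposed(host) else "no RDP reachable")
```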



Quote for the day:

"Disagree and commit is a really important principle that saves a lot of arguing." -- Jeff Bezos

Daily Tech Digest - April 25, 2024

The rise in CISO job dissatisfaction – what’s wrong and how can it be fixed?

“The reason for dissatisfaction is the lack of executive management support,” says Nikolay Chernavsky, CISO of ISSQUARED, which provides managed IT and security services as well as software products. He says he hears CISOs voice frustrations when their views on required security measures and acceptable risk are dismissed; when the board and CEO don’t define their positions on those issues; or when those leaders don’t recognize the CISO’s work in reducing risk — especially as the CISO faces more accountability and liability. Understandably, CISOs shy away from interview requests to publicly share their frustrations on these issues. However, the IANS Research report speaks to these points, noting, for example, that only 36% of CISOs said they have clear guidance from their board on its risk tolerance. Adding to these issues today is the liability that CISOs now face under the new US Securities and Exchange Commission (SEC) cyber disclosure rules as well as other regulatory and legal requirements. That increased liability is coupled with the fact that many CISOs are not covered by their organization’s directors and officers (D&O) liability insurance.


How CIOs align with CFOs to build RevOps

CIOs who transition IT from being a cost center to being a driver of innovation, transformation, and new revenues can become the leaders that the new economy needs. “We used to say that business runs technology,” says David Kadio-Morokro, EY Americas financial services innovation leader. “You tell me what you want, and I’ll code it and support you.” Now it’s switched, he says. “I really believe technology drives the business, because it’s going to impact business strategy and how the business survives,” he adds, and gen AI will force companies to rethink the value of their organizations to customers. “Developing and envisioning an AI-driven strategy is absolutely part of the equation,” he says. “And the CIO has this role of enabling these components, and they need to be part of the conversation and be able to drive that vision for the organization.” The CIO is also in a position to help the CFO evolve, too. CFOs are traditionally risk averse and expect certainty and accuracy from their technology. Not only is gen AI still a new and experimental technology that’s evolving quickly, but it is also, by its very nature, probabilistic and nondeterministic.


Do you need to repatriate from the cloud?

It should be no surprise that repatriation has gained this hype. Cloud, which grew to maturity during an economic boom, is for the first time under downward pressure as companies seek to reduce spending. Amazon, Google, Microsoft, and other cloud providers have feasted on their customers' willingness to spend, but that willingness has now been tempered by budget cuts. ... Transitioning back to on-premises is a heavy lift, and one that is hard to rescind should things go badly. And any savings will not be seen until after the transition is complete. Switching to WebAssembly-powered serverless functions, in contrast, is less expensive and less risky. Because such functions can run inside Kubernetes, the savings thesis can be tested merely by carving off a few representative services, rewriting them, and analyzing the results. Those already invested in a microservice-style architecture are well set up to rebuild just fragments of a multi-service application. Similarly, those invested in event processing chains, such as data transformation pipelines, will find it easy to identify a step or two in a sequence that can become the testbed for experimentation.
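
The "carve off a few services" test ultimately reduces to a cost comparison. A back-of-envelope sketch, with hypothetical rates rather than benchmarks, shows the shape of the analysis:

```python
# Back-of-envelope sketch of the "test the savings thesis" step: compare the
# monthly cost of a candidate service on VMs against a per-invocation
# serverless estimate. All numbers are hypothetical inputs, not benchmarks.
def vm_monthly_cost(instances: int, hourly_rate: float) -> float:
    return instances * hourly_rate * 24 * 30

def serverless_monthly_cost(requests: int, cost_per_million: float) -> float:
    return requests / 1_000_000 * cost_per_million

vm = vm_monthly_cost(instances=4, hourly_rate=0.20)                       # $576
fn = serverless_monthly_cost(requests=50_000_000, cost_per_million=2.50)  # $125
print(f"VM: ${vm:,.0f}/mo  serverless: ${fn:,.0f}/mo  delta: ${vm - fn:,.0f}/mo")
```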


ONDC’s blockchain is a Made-in-India visioning of global digital public infrastructures

ONDC Confidex is a transformative shift towards decentralised trust. Anchored in the blockchain's native properties, this shift promotes a value-exchange network of networks that enables the reuse of continuously assured data that is traceable, reliable, secure, transparent and immutable. Confidex provides a transparent ledger that tracks every phase in the supply chain, from production to end consumption. This level of detail not only fosters trust but also aligns with the broader vision of creating a global standard for ensuring the authenticity of product history, a core aspect of continuous data assurance. In the realm of digital transactions, the reliability of data underpins the foundation of trust. Confidex enables data certainty, making each transaction verifiable and immutable. This paves the way for friction-free interactions within digital marketplaces, ensuring that every piece of data holds its integrity from the point of creation to consumption. The digital economy is plagued with issues of forgery and duplication. Confidex addresses this head-on by creating unique digital records that are impossible to replicate or alter.
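
The property such a ledger leans on, that altering any past record breaks every later one, is the basic hash-chaining idea behind blockchains. A toy sketch of an append-only ledger, not ONDC's actual implementation:

```python
# Minimal sketch of hash chaining: records are linked by cryptographic hash,
# so altering any past entry breaks every later link. A toy illustration,
# not ONDC's actual implementation.
import hashlib
import json

def add_record(chain: list[dict], payload: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain: list[dict]) -> bool:
    prev = "0" * 64
    for rec in chain:
        expected = hashlib.sha256(json.dumps(
            {"payload": rec["payload"], "prev": rec["prev"]},
            sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

ledger: list[dict] = []
add_record(ledger, {"step": "production", "batch": "B-001"})
add_record(ledger, {"step": "shipment", "batch": "B-001"})
print(verify(ledger))  # True; tampering with ledger[0] would make this False
```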


How will AI-driven solutions affect the business landscape?

Redmond believes that the tech will quickly become embedded in normal business practice. “We won’t even think about asking gen AI to draft emails or documents or to generate images for our presentations.” He’s also looking forward to seeing how AI-driven video technology plays out, particularly OpenAI’s Sora. “I know that a lot of people in content generation are nervous about these tools replacing them, but I don’t think we hire an artist for their ability to draw, we hire them for their ability to draw what is in their imagination, and that is where their genius lies,” he says. “I am not sure that artists will ever stop creating wonderful works, and these technologies will just enhance that.” Tiscovschi agrees with Redmond’s outlook, stating that “this is just the beginning”. “We will continuously see more teams of humans and their AI agents or tools working together to achieve tasks,” he says. “A human quickly mining their organisation’s IP, automating repetitive tasks and then collaborating with their AI copilot on a report or piece of code will have a constantly growing multiplier on their productivity.”


5 Strategies for Better Results from an AI Code Assistant

The first step is to provide the AI assistant with high-level context. In Scarlett's scenario, a developer named Phil demonstrates this by building a Markdown editor. Since Copilot has no idea of the context, he has to provide it, and he does so with a large prompt comment containing step-by-step instructions. For instance, he tells Copilot, “Make sure we have support for bold, italics and bullet points” and “Can you use the react-markdown package.” The prompt enables Copilot to create a functional but unstyled Markdown editor. ... Follow up by providing Copilot with specific details, Scarlett advised. “If he writes a comment that says get data from [an] API, then GitHub Copilot may or may not know what he’s really trying to do, and it may not get the best result. It doesn’t know which API he wants to get the data from or what it should return,” Scarlett said. “Instead, you can write a more specific comment that says use the JSON placeholder API, pass in user IDs, and return the users as a JSON object. That way we can get more optimal results.”
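
Translating Scarlett's example into code-comment form, a vague prompt versus a specific one might look like the sketch below. JSONPlaceholder is a real public test API; the function body is the kind of result a specific prompt typically yields, not Copilot's literal output:

```python
# Illustration of the "specific comment" strategy, rendered in Python.
# The vague comment leaves an assistant guessing; the specific one pins
# down the API, the input, and the return shape.
import json
import urllib.request

# Vague prompt (likely to produce a generic, wrong-API result):
#   get data from an API

# Specific prompt (the article's example):
#   Use the JSONPlaceholder API, pass in user IDs, and return the users
#   as a JSON object.
def fetch_users(user_ids: list[int]) -> dict:
    users = {}
    for uid in user_ids:
        url = f"https://jsonplaceholder.typicode.com/users/{uid}"
        with urllib.request.urlopen(url, timeout=10) as resp:
            users[uid] = json.load(resp)
    return users

print(fetch_users([1, 2]).keys())  # -> dict_keys([1, 2])
```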


ESG research unveils critical gaps in responsible AI practices across industries

In light of the ESG Research findings, Qlik recognises the imperative of aligning AI technologies with responsible AI principles. The company’s initiatives in this area are grounded in providing robust data management and analytics capabilities, essential for any organisation aiming to navigate the complexities of AI responsibly. Qlik underscores the importance of a solid data foundation, which is critical for ensuring transparency, accountability, and fairness in AI applications. Qlik’s commitment to responsible AI extends to its approach to innovation, where ethical considerations are integrated into the development and deployment of its solutions. By focusing on creating intuitive tools that enhance data literacy and governance, Qlik aims to address key challenges identified in the report, such as ensuring AI explainability and managing regulatory compliance effectively. Brendan Grady, General Manager, Analytics Business Unit at Qlik, said, “The ESG Research echoes our stance that the essence of AI adoption lies beyond technology—it’s about ensuring a solid data foundation for decision-making and innovation.”


Applying DevSecOps principles to machine learning workloads

Unlike in a conventional software development environment with an integrated development environment (IDE), data scientists typically write code in Jupyter Notebooks, outside an IDE and often outside the traditional DevSecOps lifecycle. As a result, a data scientist who is not trained in secure development techniques can put sensitive data at risk by storing unprotected secrets or other sensitive information in a notebook. Simply put, the same tools and protections used in the DevSecOps world aren't effective for ML workloads. The complexity of the environment also matters. Conventional development cycles usually lead directly to a software application interface or API. In the machine learning space, the work is iterative: building a trainable model that leads to better outcomes. ML environments produce large quantities of serialized files necessary for a dynamic environment. The upshot? Organizations can become overwhelmed by the inherent complexities of versioning and integration.
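
One concrete mitigation for the notebook-secrets risk described above is scanning .ipynb files before they leave a data scientist's branch. A minimal sketch with illustrative regexes; real secret scanners use much richer rule sets:

```python
# Sketch: scan Jupyter notebooks for strings that look like hard-coded
# secrets. The patterns are illustrative, not a complete rule set.
import json
import re
from pathlib import Path

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"(?i)(api[_-]?key|token|password)\s*=\s*['\"][^'\"]{8,}"),
]

def scan_notebook(path: Path) -> list[str]:
    nb = json.loads(path.read_text(encoding="utf-8"))
    hits = []
    for cell in nb.get("cells", []):
        source = "".join(cell.get("source", []))
        for pattern in SECRET_PATTERNS:
            if pattern.search(source):
                hits.append(pattern.pattern)
    return hits

for nb_path in Path(".").rglob("*.ipynb"):
    if (found := scan_notebook(nb_path)):
        print(f"{nb_path}: possible secrets matching {found}")
```

A check like this can run as a pre-commit hook or CI step, bringing notebooks back inside the DevSecOps lifecycle the excerpt says they usually escape.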


Introducing Wi-Fi 7 access points that deliver more

This idea that the access point (AP) can do more than just route traffic is a core part of our product philosophy, and we’ve consistently expanded on it over multiple Wi-Fi generations with the addition of location services, IoT protocol support, and extensive network telemetry for security and AIOps. As organizations continue to innovate and leverage applications that require more bandwidth or more IoT devices to support new digital use cases, the AP must continue to do more. Delivering solutions that go beyond standards is part of HPE Aruba Networking’s history and future. Now, with the introduction of 700 series access points that support Wi-Fi 7, we are doubling IoT capabilities with dual BLE 5.4 or 802.15.4/Zigbee radios and dual USB interfaces, and improving location precision for use cases such as asset tracking and real-time inventory tracking. Moreover, we are using both the resources and the management of the AP to their full potential by delivering ubiquitous high-performance connectivity and processing at the edge. These access points not only offer full support for the 2.4, 5, and 6 GHz bands but also have enough memory and compute capacity to run containers.


Why Your Enterprise Should Create an Internal Talent Marketplace

Strategically, an internal talent marketplace is a way to empower employees to be in the driver’s seat of their career journey, says Gretchen Alarcon, senior vice president and general manager of employee workflows at software and cloud platform provider ServiceNow, via email. "Tactically, it's a platform driven by technology that uses AI to match existing talent to open roles or projects within the organization," she explains. "It provides a more transparent view of new opportunities for employees and identifies untapped employee potential based on skills rather than anecdotes." ... A talent marketplace is only as good as the information it contains, Williamson warns. "Organizations should emphasize to employees that it's in their interest to keep the skills and preferences in their profiles up to date," he says. Managers, meanwhile, need to define the exact critical skills needed to be successful in a particular job or role. "That information drives recommended opportunities for employees and increases their chances of being identified by project managers to fill roles."
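
The matching mechanism Alarcon describes can be pictured as simple skills overlap, which also shows why stale profiles produce poor matches. A toy sketch; production marketplaces use far richer models:

```python
# Toy sketch of skills-based matching: score each open role by overlap
# between its required skills and an employee's profile. Names and skill
# sets are hypothetical.
def match_score(employee_skills: set[str], role_skills: set[str]) -> float:
    if not role_skills:
        return 0.0
    return len(employee_skills & role_skills) / len(role_skills)

employee = {"python", "sql", "data visualization"}
roles = {
    "analytics engineer": {"python", "sql", "dbt"},
    "frontend developer": {"typescript", "react", "css"},
}
for role, skills in roles.items():
    print(f"{role}: {match_score(employee, skills):.0%} match")
# -> analytics engineer: 67% match; frontend developer: 0% match
```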



Quote for the day:

"Rarely have I seen a situation where doing less than the other guy is a good strategy." -- Jimmy Spithill