Daily Tech Digest - July 11, 2024

Will AI Ever Pay Off? Those Footing the Bill Are Worrying Already

Though there is some nervousness around how long soaring demand can last, no one doubts the business models for those at the foundations of the AI stack. Companies need the chips and manufacturing they, and they alone, offer. Other winners are the cloud companies that provide data centers. But further up the ecosystem, the questions become more interesting. That’s where the likes of OpenAI, Anthropic and many other burgeoning AI startups are engaged in the much harder job of finding business or consumer uses for this new technology, which has gained a reputation for being unreliable and erratic. Even if these flaws can be ironed out (more on that in a moment), there is growing worry about a perennial mismatch between the cost of creating and running AI and what people are prepared to pay to use it. ... Another big red flag, economist Daron Acemoglu warns, lies in the shared thesis that by crunching more data and engaging more computing power, generative AI tools will become more intelligent and more accurate, fulfilling their potential as predicted. His comments were shared in a recent Goldman Sachs report titled “Gen AI: Too Much Spend, Too Little Benefit?”


How top IT leaders create first-mover advantage

“Some of the less talked about aspects of a high-performing team are the human traits: trust, respect, genuine enjoyment of each other,” Sample says. “I’m looking at experience and skills, but I’m also thinking about how the person will function collaboratively with the team. Do I believe they’ll have the best interest of the team at heart? Can the team trust their competency?” Sample also says he focuses on “will over skill.” “Qualities like curiosity and craftsmanship are sustainable, flexible skills that can evolve with whatever the new ‘toy’ in technology is,” he says. “If you’re approaching work with that bounty of curiosity and that willing mindset, the skills can adapt.” ... Steadiness and calm from the leader create the kind of culture where people are encouraged to take risks and work together to solve big problems and execute on bold agendas. That, ultimately, is what enables a technology organization to capitalize on innovative technologies. In fact, reflecting on his legacy as a CIO, Sample believes it’s not really about the technology; it’s about the people. His success, he says, has been in building the teams that operate the technology.


Can AI be Meaningfully Regulated, or is Regulation a Deceitful Fudge?

The patchwork approach is used by federal agencies in the US. Different agencies have responsibility for different verticals and can therefore introduce regulations more relevant to specific organizations. For example, the FCC regulates interstate and international communications, the SEC regulates capital markets and protects investors, and the FTC protects consumers and promotes competition. ... The danger is that the EU’s recent monolithic AI Act will go the same way as GDPR. Kolochenko prefers the US model. He believes the smaller, more agile method of targeted regulations used by US federal agencies can provide better outcomes than the unwieldy and largely static monolithic approach adopted by the EU. ... To regulate or not to regulate is a rhetorical question – of course AI must be regulated to minimize current and future harms. The real questions are whether it will be successful (no, it will not), partially successful (perhaps, but only so far as the curate’s egg is good), and will it introduce new problems for AI-using businesses (from empirical and historical evidence, yes).


The Team Sport of Cloud Security: Breaking Down the Rules of the Game

Cloud security today is too complicated to fall on the shoulders of one person or party. For this reason, most cloud services operate on a shared responsibility model that divvies security roles between the CSP and the customer. Large players in this space, such as AWS and Microsoft Azure, have even published frameworks that draw the lines of liability in the sand. While the exact delineations can change depending on the service model ... However, while the expectations laid out in shared responsibility models are designed to reduce confusion, customers often struggle to conceptualize what this framework looks like in practice. And unfortunately, when there’s a lack of clarity, there’s a window of opportunity for threat actors. ... The best-case scenario for mitigating cloud security risks is when CSPs and customers are transparent and aligned on their responsibilities right from the beginning. Even the most secure cloud services aren’t foolproof, so customers need to be aware of what security elements they’re “owning” versus what falls in the court of their CSP.


AI's new frontier: bringing intelligence to the data source

There has been a shift with organisations exploring how to bring AI to their data rather than uploading proprietary data to AI providers. This shift reflects a growing concern for data privacy and the desire to maintain control over proprietary information. Business leaders believe they can better manage security and privacy while still benefiting from AI advancements by keeping data in-house. Bringing AI solutions directly to an organisation’s data eliminates the need to move vast amounts of data, reducing security risks and maintaining data integrity. Crucially, organisations can maintain strict control over their data by implementing AI solutions within their own infrastructure to ensure that sensitive information remains protected and complies with privacy regulations. Additionally, keeping data in-house minimises the risks associated with data breaches and unauthorised access from third parties, providing peace of mind for both the organisation and its clients. Advanced AI-driven data management tools deliver this solution to businesses, automating data cleaning, validation, and transformation processes to ensure high-quality data for AI training.


How AI helps decode cybercriminal strategies

The biggest use case for AI is its ability to process, analyze, and interpret natural language communication efficiently. AI algorithms can quickly identify patterns, correlations, and anomalies within massive datasets, providing cybersecurity professionals with actionable insights. This capability not only enhances the speed and accuracy of threat detection but also enables a more proactive and comprehensive approach to securing organizations against dark web-originated threats. This is vital in an environment where the difference between detecting a threat early in the cyber kill chain vs once the attacker has achieved their objective can be hundreds of thousands of dollars. ... Another potential use case of AI is in quickly identifying and alerting specific threats relating to an organization, helping with the prioritization of intelligence. One thing an AI could look for in data is intention – to assess whether an actor is planning an attack, is asking for advice, is looking to buy or to sell access or tooling. Each of these indicates a different level of risk for the organization, which can inform security operations.
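The intent-assessment idea described above can be sketched as a toy classifier. The intent labels, indicator phrases, and risk levels below are hypothetical illustrations; a production system would use a trained NLP model rather than keyword matching.

```python
# Toy intent labels with hypothetical indicator phrases; a production
# system would use an NLP model, not keyword matching.
INTENT_RISK = {
    "selling access": (("selling access", "access for sale"), "high"),
    "planning attack": (("target", "when we hit"), "high"),
    "asking advice": (("how do i", "any tips"), "low"),
}

def classify(post):
    """Map a forum post to an (intent, risk) pair so analysts can
    prioritise which intelligence to act on first."""
    text = post.lower()
    for intent, (phrases, risk) in INTENT_RISK.items():
        if any(p in text for p in phrases):
            return intent, risk
    return "unknown", "review"

print(classify("Selling access to a healthcare org, DM me"))
# ('selling access', 'high')
```

Each label maps to a different operational response: a "selling access" post about your organization warrants immediate investigation, while advice-seeking chatter may only raise a watch item.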


Widely Used RADIUS Authentication Flaw Enables MITM Attacks

The attack scenario - researchers say a "well-resourced attacker" could make it practical - fools the Remote Authentication Dial-In User Service into granting access to a malicious user without the attacker having to know or guess a login password. Despite its 1990s heritage and reliance on the MD5 hashing algorithm, many large enterprises still use the RADIUS protocol for authentication to the VPN or Wi-Fi network. It's also "universally supported as an access control method for routers, switches and other network infrastructure," researchers said in a paper published Tuesday. The protocol is used to safeguard industrial control systems and 5G cellular networks. ... For the attack to succeed, the hacker must calculate an MD5 collision within the client session timeout, where the common defaults are either 30 seconds or 60 seconds. The 60-second default is typically for users that have enabled multifactor authentication. That window is still too short for the researchers, who were able to reduce the compute time from hours down to minutes, but not to seconds. An attacker working with better hardware or cloud computing resources might do better, they said.
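For context on why the MD5 dependence matters: RFC 2865 defines the Response Authenticator as an MD5 hash over the packet header, the Request Authenticator, the attributes, and the shared secret. A minimal sketch of that computation, with made-up example values:

```python
import hashlib
import struct

def response_authenticator(code, pkt_id, attributes, request_auth, secret):
    """Compute the RADIUS Response Authenticator per RFC 2865:
    MD5(Code + ID + Length + RequestAuthenticator + Attributes + Secret).
    The reliance on an unkeyed MD5 construction here is what makes
    collision attacks against forged responses conceivable at all."""
    length = 20 + len(attributes)  # 20-byte header plus attributes
    header = struct.pack(">BBH", code, pkt_id, length)
    return hashlib.md5(header + request_auth + attributes + secret).digest()

# Hypothetical example values, not a real exchange:
auth = response_authenticator(
    code=2,                      # Access-Accept
    pkt_id=42,
    attributes=b"",
    request_auth=b"\x00" * 16,
    secret=b"shared-secret",
)
print(auth.hex())
```

Because the secret enters the hash only as a trailing suffix, an attacker who can engineer a collision in the preceding bytes never needs to learn the secret itself.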


Can RAG solve generative AI’s problems?

Currently, RAG offers probably the most effective way to enrich LLMs with novel and domain-specific data. This capability is particularly important for systems such as chatbots, since the information they generate must be up to date. However, RAG cannot reason iteratively, which means it is still dependent on the underlying dataset (knowledge base, in RAG’s case). Even though this dataset is dynamically updated, if the information there isn’t coherent or is poorly categorized and labeled, the RAG model won’t be able to understand that the retrieval data is irrelevant, incomplete, or erroneous. It would also be naive to expect RAG to solve the AI hallucination problem. Generative AI algorithms are statistical black boxes, meaning that developers do not always know why the model hallucinates and whether it is caused by insufficient or conflicting data. Moreover, dynamic data retrieval from external sources does not guarantee there are no inherent biases or disinformation in this data. ... Therefore, RAG is in no way a definitive solution. In the case of sensitive industries, such as healthcare, law enforcement, or finance, fine-tuning LLMs with thoroughly cleaned, domain-specific datasets might be a more reliable option.
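The retrieve-then-generate pattern can be sketched minimally as below, using naive word overlap in place of a real embedding index; all documents and names here are illustrative:

```python
def retrieve(query, knowledge_base, k=2):
    """Toy retrieval: rank documents by word overlap with the query.
    A real RAG system would use dense embeddings and a vector index."""
    q_words = set(query.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, docs):
    """Ground the LLM's answer in retrieved context. Note the model has
    no way to notice if the documents are incoherent or mislabeled --
    the weakness the article describes."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

kb = [
    "RADIUS uses MD5 for response authentication.",
    "RAG retrieves documents before generation.",
    "Feature flags toggle code paths at runtime.",
]
prompt = build_prompt("How does RAG work?", retrieve("How does RAG work?", kb))
print(prompt)
```

The quality ceiling is visible even in this sketch: the generation step consumes whatever the retrieval step hands it, so a stale or mislabeled knowledge base degrades the answer no matter how capable the model is.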


Navigating the New Data Norms with Ethical Guardrails for Ethical AI

To convert ethical principles into a practical roadmap, businesses need a clear framework aligned with industry standards and company values. Also, beyond integrity and fairness, businesses must demonstrate tangible ROI by focusing on metrics like customer acquisition cost, lifetime value, and employee engagement. Operationalizing ethical guardrails involves creating a structured approach to ensure AI deployment aligns with ethical standards. Companies can start by fostering a culture of ethics through comprehensive employee education programs that emphasize the importance of fairness, transparency, and accountability. Establishing clear policies and guidelines is crucial, alongside implementing robust risk assessment frameworks to identify and mitigate potential ethical issues. Regular audits and continuous monitoring should be part of the process to ensure adherence to these standards. Additionally, maintaining transparency for end-users by openly sharing how AI systems make decisions, and providing mechanisms for feedback, further strengthens trust and accountability.
 

How CIOs Should Approach DevOps

CIOs should have a vision for scaling DevOps across the enterprise for unlocking its full range of benefits. A collaborative culture, automation, and technical skills are all necessary for achieving scale. Besides these, the CIO needs to think about the right team structure, security landscape, and technical tools that will take DevOps safely from pilot to production to enterprise scale. It is recommended to start small: dedicate a small platform team focused only on building a platform that enables automation of various development tasks. Build the platform in small steps, incrementally and iteratively. Put together another small team with all the skills required to deliver value to customers. Constantly gather customer feedback and incorporate it to improve development at every stage. Ultimately, customer satisfaction is what matters the most in any DevOps program. Security needs to be part of every DevOps process right from the start. When a process is automated, so should its security and compliance aspects. Frequent code reviews and building awareness among all the concerned teams will help to create secure, resilient applications that can be scaled with confidence.



Quote for the day:

“There is no failure except in no longer trying.” -- Chris Bradford

Daily Tech Digest - July 10, 2024

How platform teams lead to better, faster, stronger enterprises

Platform teams are uniquely equipped to optimize resource allocation because they sit in between developers and the cloud infrastructure and compute that developers need, and are able to maximize the efficiency and effectiveness of software development processes. With their unique set of skills and expertise, they effectively collaborate with other teams, including developers, data scientists, and operations teams, to accurately understand their needs and pain points. Using a product approach, platform teams remove barriers for developers and operations teams by offering shared services for developer self-service, enabling faster modernization within organizational boundaries and automation to simplify the management of applications and Kubernetes clusters in the cloud. Fostering a culture of innovation, platform teams play a crucial role in keeping the organization at the forefront of emerging trends and technologies. This enables enterprises to provide innovative solutions that set them apart in the market.


Developing An AI Use Policy

An AI Use Policy is designed to ensure that any AI technology your business uses is deployed in a safe, reliable and appropriate manner that minimises risks. It should be developed to inform and guide your employees on how AI can be used within your business. ... Perhaps the most important part for the majority of your employees: set specific do’s and don’ts for inputs and outputs. This is to ensure compliance with data security, privacy and ethical standards. For example, “Don’t input any company confidential, commercially sensitive or proprietary information”, “Don’t use AI tools in a way that could inadvertently perpetuate or reinforce bias” and “Don’t input any customer or co-worker’s personal data”. For outputs, guidance can reiterate to staff the potential for misinformation or ‘hallucinations’ generated by AI. Consider rules such as “Clearly label any AI generated content”, “Don’t share any output without careful fact-checking” or “Make sure that a human has the final decision when using AI to help make a decision which could impact any living person”.
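The "don't input" rules can even be enforced mechanically before a prompt leaves the organisation. The patterns and blocked terms below are hypothetical illustrations; a real deployment would rely on a proper data loss prevention tool rather than a regex:

```python
import re

# Hypothetical patterns; a real deployment would use a proper DLP tool.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
BLOCKED_TERMS = ("confidential", "commercially sensitive", "proprietary")

def check_prompt(text):
    """Return a list of policy violations found in text that is about
    to be sent to an external AI tool, enforcing the policy's
    'don't input' rules."""
    violations = []
    if EMAIL.search(text):
        violations.append("possible personal data: email address")
    for term in BLOCKED_TERMS:
        if term in text.lower():
            violations.append(f"blocked term: {term!r}")
    return violations

print(check_prompt("Summarise this CONFIDENTIAL memo for jane@example.com"))
```

An empty list means the prompt passed the checks; anything else can be surfaced to the employee before submission, turning the written policy into a guardrail rather than a document nobody reads.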


Synergy between IoT and blockchain transforming operational efficiency

The synergy between the two technologies is integral to achieving Industry 4.0 goals, including digital transformation, decentralised connectivity, and smart industry advancements. Via this integration, organisations can achieve real-time visibility into production operations, optimise supply chain processes, and enhance overall efficiency. ... In regulated industries like pharmaceutical manufacturing, where compliance is crucial, integrating IoT and blockchain lets companies onboard suppliers to upload raw material info, batch numbers, and quality checks to a blockchain ledger. IoT devices automate data acquisition during manufacturing and storage, ensuring data integrity and transparency. In smart city ecosystems, local authorities share data with service providers for waste management, traffic updates, and more. Traffic data from sensors can be securely uploaded to a blockchain, where third-party services like food delivery and ridesharing can access it to optimise operations. Logistics companies use IoT systems to gather data on location and handling, which is uploaded to a blockchain ledger to track goods, estimate delivery time, and provide real-time updates.
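The ledger-upload step can be illustrated with a toy hash chain, where each stored sensor reading commits to the previous one. This is a sketch only: a real deployment would use a distributed blockchain with consensus, not a local list.

```python
import hashlib
import json

def add_block(chain, payload):
    """Append an IoT reading to a toy hash-chained ledger. Each block
    commits to its predecessor's hash, so tampering with any stored
    sensor reading breaks every later hash in the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"payload": payload, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps({"payload": payload, "prev": prev_hash},
                   sort_keys=True).encode()
    ).hexdigest()
    chain.append(block)
    return block

ledger = []
add_block(ledger, {"sensor": "temp-01", "celsius": 4.2})
add_block(ledger, {"sensor": "temp-01", "celsius": 4.5})
print(ledger[-1]["prev"] == ledger[0]["hash"])  # True: blocks are linked
```

This linkage is what gives auditors in regulated industries confidence that batch and quality-check records have not been retroactively altered.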


Ignore Li-ion fire risks at your peril

Li-ion batteries are prone to destructive and hard-to-control fires. There have been several reported incidents in data centers, some of which have led to serious outages, but they are not well-documented or systematically studied. ... A commonly held view is that Li-ion’s fire risk in the data center is overstated, partly as a result of marketing by vendors of alternative chemistries such as salt and nickel-zinc. If these products are promoted as a “safe” alternative, then it will (it is speculated) create a perception that Li-ion is “unsafe.” After assessing the evidence, examining the science, and hearing from data center operators at recent member meetings, Uptime Institute is taking a cautious and practicable stance at this point. While it is true that Li-ion batteries have a higher risk of fire compared with other chemistries, and these fires are particularly problematic, Uptime Institute engineers do not think Li-ion batteries should be rejected out of hand. ... Data center builders and operators should carefully consider the benefits of Li-ion batteries alongside the risks. As well as the obvious risk of serious fires, there are financial and reputational risks in preparing for, avoiding, and responding to such incidents.


More than a CISO: the rise of the dual-titled IT leader

Dual-title roles give CISOs new levers to work with and more scope to drive strategic integration and alignment of cybersecurity within the organization. ... Belknap finds having his own team of engineers puts him in a stronger position when working with partners. When looking for support or assistance with a project, his team will have already built something, reducing the amount of work needed from the partner team. “This means we can lean on them to be responsible for the things that only they can do. I don’t have to pull them into the work that only I can do or the work that’s not aligned to their expertise,” he says. These dual-title roles also recognize how CISOs are increasingly operating as technology leaders and operators of the organization, according to Adam Ely, head of digital products at Fidelity Investments who was formerly the firm’s CISO and has a long history in security. Ely says that as CISOs typically work across an organization, know how the business lines work, and are day-to-day leaders of people and technology as well as crisis managers, it stands them in good stead for dual-title or more senior positions. 


You Can’t Wish Away Technology Complexity

Every business succeeds because of technology. Every person gets paid by technology. The value of our currency itself is about technology. Of course, it is not only about technology. But tell that to the CFO or CLO. When it is about finance, there is very little pushback in saying it is all about money. When it is about legal, there is no pushback about it being about law. I’ve noticed only technologists pull back and say, “You’re right, it’s not about technology.” ... See, what people often forget is that technology complexity is cool on multiple levels. It gives us the ability to make different choices for stakeholders and customers (I mean real customers, not stakeholders that think they are customers – note to business stakeholders, you and I get our paychecks from the same place, you are not my customer. Our customer is my customer). But while this complexity allows for choice, it also creates a dependency on understanding those choices. Or a dependency on a professional who does. I don’t pretend to understand medicine. That is why I ask doctors what to do.


Electronic Health Record Errors Are a Serious Problem

The exposure of healthcare records, in even minor ways, leaves patients highly vulnerable. “I never reached out to this woman [whose records were entered into my father’s], but I had all her contact information. I could have gone to her house and handed her the copy of the results I had found in my dad’s records,” Hollingsworth says. ... Data aggregators pose a further risk. These organizations may collect deidentified data to perform analyses on population-level health issues for both healthcare organizations and insurance companies. “Are they following the same security standards that we follow in the health care transaction world?” Ghanayem asks. “I don’t know.” ... Clear distinctions between important information fields must be made to cut down on adjacency errors. Concise patient summaries at the beginning of each record and usable search features may increase usability and decrease frustration that leads to the introduction of errors. And refining when alerts are issued can decrease alert fatigue, which may lead providers to simply ignore alerts even when they are valid.


Diversifying cyber teams to tackle complex threats

To make a significant change and deliver a more diverse cyber workforce, we need to focus on leadership and change our language and processes for recruitment. This takes courage and is the biggest challenge organizations face. Having a diverse team helps others see it is a place for them. It isn’t just about attracting talent; it’s also about openness and retaining talent. Organizations need to help individuals from diverse backgrounds see themselves as role models, and those role models need to be out shouting about the opportunities within the sector. Diversity fosters a sense of belonging and inclusivity, making the cybersecurity field more attractive to a wider range of individuals. When potential recruits see relatable role models within a team, it breaks down the traditional and somewhat homogenous perception of cybersecurity. This inclusivity is crucial for attracting talent from underrepresented groups, particularly women and minority groups, who may not have traditionally seen themselves in cybersecurity roles. A diverse team with strong role models creates a positive feedback loop.


Nanotechnology and SRE: Pioneering Precision in Performance

Nanotechnology offers the opportunity to transform SRE at the atomic level — addressing individual tasks, subtasks, and tickets. For example, extra-sensitive nanosensors can continuously monitor system performance metrics, including temperature, voltage, and processing load. When placed in data centers, these sensors enable real-time data collection and analysis, detecting electrical and mechanical issues before they escalate and extending the lifespan of technological components. Nanobots can be programmed to address hardware issues and routine maintenance tasks. Together, these technologies can integrate into a self-healing and continuously improving system in line with SRE principles. ... Nanotechnology can potentially transform SRE, leading to enhanced system reliability and performance. Nanotechnology-enabled solutions can allow more precise monitoring, optimization, and real-time improvements, supporting the key pillars of SRE. At the same time, the foundational principles of SRE can be applied to ensure the reliability of advanced nanotechnology systems. 
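The monitoring loop such nanosensors would feed can be sketched as a simple threshold check; the metrics and threshold values below are hypothetical illustrations of the "detect before escalation" principle:

```python
# Hypothetical alert thresholds for illustration only.
THRESHOLDS = {"temperature_c": 80.0, "voltage_v": 1.3, "load_pct": 95.0}

def evaluate(readings):
    """Compare sensor readings against thresholds and return the
    metrics that need attention -- the continuous detection loop a
    self-healing SRE system would run over nanosensor telemetry."""
    return [metric for metric, value in readings.items()
            if metric in THRESHOLDS and value > THRESHOLDS[metric]]

alerts = evaluate({"temperature_c": 83.5, "voltage_v": 1.21, "load_pct": 62.0})
print(alerts)  # only the temperature reading exceeds its threshold
```

In the self-healing vision described above, the returned list would feed a remediation queue (or, speculatively, a nanobot task list) rather than just an alert channel.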


Three Areas Where AI Can Make a Huge Difference Without Significant Job Risk

Doing a QC job can be annoying because even though the job is critical to the outcome, your non-QC peers and management treat you like a potentially avoidable annoyance. You stand in the way of shipping on time and at volume, potentially delaying or even eliminating performance-based bonuses. We are already discovering that to assure the quality of an AI-driven coding effort, a second AI is needed to check the result, because people just don’t like doing QC on code, particularly those who created it. ... In short, properly applied AI could highlight and help address problems that are critically reducing a company’s ability to perform to its full potential and preventing it from becoming a great place to work. ... Calculating an employee’s contribution and then using it to set compensation transparently should significantly reduce the number of employees who feel they are being treated unfairly by eliminating that unfairness or by showing them a path to improve their value and thus positively impact their pay.



Quote for the day:

"When you stop chasing the wrong things you give the right things a chance to catch you." -- Lolly Daskal

Daily Tech Digest - July 09, 2024

AI stack attack: Navigating the generative tech maze

Successful integration often depends on having a solid foundation of data and processing capabilities. “Do you have a real-time system? Do you have stream processing? Do you have batch processing capabilities?” asks Intuit’s Srivastava. These underlying systems form the backbone upon which advanced AI capabilities can be built. For many organizations, the challenge lies in connecting AI systems with diverse and often siloed data sources. Illumex has focused on this problem, developing solutions that can work with existing data infrastructures. “We can actually connect to the data where it is. We don’t need them to move that data,” explains Tokarev Sela. This approach allows enterprises to leverage their existing data assets without requiring extensive restructuring. Integration challenges extend beyond just data connectivity. ... Security integration is another crucial consideration. As AI systems often deal with sensitive data and make important decisions, they must be incorporated into existing security frameworks and comply with organizational policies and regulatory requirements.


How to Architect Software for a Greener Future

Firstly, it’s a time shift, moving to a greener time. You can use burstable or flexible instances to achieve this. It’s essentially a sophisticated scheduling problem, akin to looking at a forecast to determine when the grid will be greenest—or conversely, how to avoid peak dirty periods. There are various methods to facilitate this on the operational side. Naturally, this strategy should apply primarily to non-demanding workloads. ... Another carbon-aware action you can take is location shifting—moving your workload to a greener location. This approach isn’t always feasible but works well when network costs are low, and privacy considerations allow. ... Resiliency is another significant factor. Many green practices, like autoscaling, improve software resilience by adapting to demand variability. Carbon awareness actions also serve to future-proof your software for a post-energy transition world, where considerations like carbon caps and budgets may become commonplace. Establishing mechanisms now prepares your software for future regulatory and environmental challenges.
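The time-shifting strategy reduces to a small scheduling problem: given a carbon-intensity forecast, pick the start hour that minimises average intensity over the job's duration. A sketch with made-up forecast numbers:

```python
def greenest_window(forecast, duration):
    """Pick the start hour with the lowest average carbon intensity
    (gCO2/kWh) for a job lasting `duration` hours -- the time-shifting
    strategy for flexible, non-demanding workloads."""
    best_start, best_avg = 0, float("inf")
    for start in range(len(forecast) - duration + 1):
        avg = sum(forecast[start:start + duration]) / duration
        if avg < best_avg:
            best_start, best_avg = start, avg
    return best_start, best_avg

# Hypothetical 8-hour grid-intensity forecast:
forecast = [420, 390, 310, 180, 150, 160, 300, 410]
start, avg = greenest_window(forecast, duration=3)
print(start)  # hours 3-5 are the greenest window
```

The inverse of the same calculation avoids peak dirty periods; either way, the job only needs to be burstable or deferrable for the scheduler to have room to act.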


Evaluating board maturity: essential steps for advanced governance

Most boards lack a firm grasp of fundamental governance principles. I'd go so far as to say that 8 or 9 out of 10 boards could be described this way. Your average board director is intelligent and respected within their communities. But they often don't receive meaningful governance training. Instead, they follow established board norms without questioning them, which can lead to significant governance failures. Consider Enron, Wells Fargo, Volkswagen AG, Theranos, and, recently, Boeing—all had boards filled with recognized experts. However, inadequate oversight caused or allowed them to make serious and damaging errors. This is most starkly illustrated by Barney Frank, co-author of the Dodd-Frank Act (passed following the 2008 financial crisis) and a board member of Silicon Valley Bank when it collapsed. Having brilliant board members doesn't guarantee effective governance. The point is that, for different reasons, consultants and experts can 'misread' where a board is at. Frankly, this is most often due to just being lazy. But sometimes it is due to just not being clear about what to look for.


Mastering Serverless Debugging

Feature flags allow you to enable or disable parts of your application without deploying new code. This can be invaluable for isolating issues in a live environment. By toggling specific features on or off, you can narrow down the problematic areas and observe the application’s behavior under different configurations. Implementing feature flags involves adding conditional checks in your code that control the execution of specific features based on the flag’s status. Monitoring the application with different flag settings helps identify the source of bugs and allows you to test fixes without affecting the entire user base. ... Logging is one of the most common and essential tools for debugging serverless applications. I wrote and spoke a lot about logging in the past. By logging all relevant data points, including inputs and outputs of your functions, you can trace the flow of execution and identify where things go wrong. However, excessive logging can increase costs, as serverless billing is often based on execution time and resources used. It’s important to strike a balance between sufficient logging and cost efficiency. 
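The toggling described above can be as simple as a conditional around the suspect code path. The flag names and pipeline labels here are hypothetical, and real systems fetch flag state from a remote service so it can change without a redeploy:

```python
# A minimal in-process flag store; real systems fetch flags from a
# remote service so they can be toggled without deploying new code.
FLAGS = {"new_checkout": False, "verbose_tracing": True}

def is_enabled(flag):
    return FLAGS.get(flag, False)

def handle_request(order):
    """Route between old and new code paths based on a flag, so a
    suspect feature can be switched off while debugging in production
    without touching the rest of the user base."""
    if is_enabled("new_checkout"):
        return f"new pipeline: {order}"
    return f"legacy pipeline: {order}"

print(handle_request("order-123"))   # legacy path while the flag is off
FLAGS["new_checkout"] = True
print(handle_request("order-123"))   # toggled on, no redeploy needed
```

Observing the application's behavior with the flag on versus off is exactly the isolation technique the paragraph describes: if the bug disappears when the flag is off, the problem is inside the gated feature.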


Implementing Data Fabric: 7 Key Steps

As businesses generate and collect vast amounts of data from diverse sources, including cloud services, mobile applications, and IoT devices, the challenge of managing, processing, and leveraging this data efficiently becomes increasingly critical. Data fabric emerges as a holistic approach to address these challenges by providing a unified architecture that integrates different data management processes across various environments. This innovative framework enables seamless data access, sharing, and analysis across the organization irrespective of where the data resides – be it on-premises or in multi-cloud environments. The significance of data fabric lies in its ability to break down silos and foster a collaborative environment where information is easily accessible and actionable insights can be derived. By implementing a robust data fabric strategy, businesses can enhance their operational efficiency, drive innovation, and create personalized customer experiences. Implementing a data fabric strategy involves a comprehensive approach that integrates various data management and processing disciplines across an organization.


Empowering Self-Service Users in the Digital Age

Ultimately, portals must strike the balance between freedom and control, which can be achieved by ensuring flexibility with role-based access control. Granting end users the freedom to deploy within a secure framework of predefined permissions creates an environment ripe for innovation within a robustly protected environment. This means users can explore, experiment and innovate without concerns about security boundaries or unnecessary hurdles. But of course, as with any project, organizations can’t afford to build something and consider that job done. Measuring success is ongoing. Metrics such as how often the portal is accessed, who uses what, which service catalogs are used and how the portal usage should be tracked, along with other relevant data will help point to any areas that need improvement. It is also important to remember that it is collaborative work between the platform team and end users. And in technology, there is always room for improvement. For instance, recent advances in AI/ML could soon be leveraged to analyze previously inaccessible datasets and generate smarter and faster decision-making.
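Role-based access control of this kind boils down to checking a requested action against the permission set a role defines. The roles and permission names below are illustrative, not a particular portal's schema:

```python
# Hypothetical roles and permissions for a self-service portal.
ROLE_PERMISSIONS = {
    "viewer": {"catalog.read"},
    "developer": {"catalog.read", "service.deploy"},
    "platform-admin": {"catalog.read", "service.deploy", "quota.edit"},
}

def authorize(role, action):
    """Freedom within guardrails: a user may act only inside the
    permission set their role predefines, so experimentation stays
    within a securely bounded environment."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(authorize("developer", "service.deploy"))  # True
print(authorize("viewer", "service.deploy"))     # False
```

The measurement side of the paragraph fits the same model: logging each `authorize` call yields exactly the "who uses what" usage data the platform team needs to improve the portal.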


Desperate for power, AI hosts turn to nuclear industry

As opposed to adding new green energy to meet AI’s power demands, tech companies are seeking power from existing electricity resources. That could raise prices for other customers and hold back emission-cutting goals, according to The Wall Street Journal and other sources. According to sources cited by the WSJ, the owners of about one-third of US nuclear power plants are in talks with tech companies to provide electricity to new data centers needed to meet the demands of an artificial-intelligence boom. ... “The power companies are having a real problem meeting the demands now,” Gold said. “To build new plants, you’ve got to go through all kinds of hoops. That’s why there’s a power plant shortage now in the country. When we get a really hot day in this country, you see brownouts.” The available energy could go to the highest bidder. Ironically, though, the bill for that power will be borne by AI users, not its creators and providers. “Yeah, [AWS] is paying a billion dollars a year in electrical bills, but their customers are paying them $2 billion a year. That’s how commerce works,” Gold said.


Fake network traffic is on the rise — here’s how to counter it

“Attempting to homogenize the bot world and the potential threat it poses is a dangerous prospect. The fact is, it is not that simple, and cyber professionals must understand the issue in the context of their own goals...” ... “Cyber professionals need to understand the bot ecosystem and the resulting threats in order to protect their organizations from direct network exploitation, indirect threat to the product through algorithm manipulation, a poor user experience, and the threat of users being targeted on their platform,” Cooke says. “As well as [understanding] direct security threats from malicious actors, cyber professionals need to understand the impact on day-to-day issues like advertising and network management from bot profiles as a whole,” she adds. “So cyber professionals must ensure that the problem is tackled holistically, protecting their networks, data and their users from this increasingly sophisticated threat. Measures to detect and prevent malicious bot activity must be built into new releases, and cyber professionals should act as educational evangelists for users to help them help themselves with a strong awareness of the trademarks of fake traffic and malicious profiles.”
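As a toy illustration of the kind of detection measures described here, a first-pass filter might combine a request-rate threshold with known automated user-agent strings. The thresholds and agent list below are illustrative assumptions only; production bot detection relies on far richer behavioral signals:

```python
from collections import Counter

# Illustrative-only list of user-agent substrings associated with automation.
KNOWN_BOT_AGENTS = {"curl", "python-requests", "scrapy"}

def flag_suspicious(requests_log, rate_threshold=100):
    """Flag client IPs by volume or user-agent.

    requests_log: list of (client_ip, user_agent) tuples for one time window.
    Returns the set of IPs exceeding the rate threshold or presenting
    a known automated user-agent.
    """
    counts = Counter(ip for ip, _ in requests_log)
    flagged = {ip for ip, n in counts.items() if n > rate_threshold}
    flagged |= {ip for ip, ua in requests_log
                if any(bot in ua.lower() for bot in KNOWN_BOT_AGENTS)}
    return flagged
```

Note that a heuristic like this catches only the crudest traffic; the article’s point stands that the bot ecosystem must be understood holistically rather than filtered by one rule.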


Researchers reveal flaws in AI agent benchmarking

Since calling the models underlying most AI agents repeatedly can increase accuracy, researchers can be tempted to build extremely expensive agents so they can claim the top spot in accuracy. But the paper described three simple baseline agents developed by the authors that outperform many of the complex architectures at much lower cost. ... Two factors determine the total cost of running an agent: the one-time costs involved in optimizing the agent for a task, and the variable costs incurred each time it is run. ... Researchers and those who develop models have different benchmarking needs from downstream developers who are choosing an AI to use in their applications. Model developers and researchers don’t usually consider cost during their evaluations, while for downstream developers, cost is a key factor. “There are several hurdles to cost evaluation,” the paper noted. “Different providers can charge different amounts for the same model, the cost of an API call might change overnight, and cost might vary based on model developer decisions, such as whether bulk API calls are charged differently.”
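The two-factor cost model can be expressed directly; the dollar figures below are illustrative only, not drawn from the paper:

```python
def total_agent_cost(fixed_cost: float, per_run_cost: float, runs: int) -> float:
    """Total cost of operating an agent: one-time optimization cost
    plus the variable cost incurred on each run."""
    return fixed_cost + per_run_cost * runs

# Illustrative comparison: an agent that is cheap to optimize but costly
# to run overtakes the opposite design as run count grows.
cheap_to_optimize = total_agent_cost(100.0, 2.0, 1000)    # 2100.0
cheap_to_run = total_agent_cost(1000.0, 0.5, 1000)        # 1500.0
```

This is why the same two agents can rank differently for a researcher running a benchmark once and a downstream developer expecting millions of runs.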


10 ways to prevent shadow AI disaster

Shadow AI is practically inevitable, says Arun Chandrasekaran, a distinguished vice president analyst at research firm Gartner. Workers are curious about AI tools, seeing them as a way to offload busy work and boost productivity. Others want to master their use, seeing that as a way to prevent being displaced by the technology. Others became comfortable with AI for personal tasks and now want the technology on the job. ... shadow AI could cause disruptions among the workforce, he says, as workers who are surreptitiously using AI could have an unfair advantage over those employees who have not brought in such tools. “It is not a dominant trend yet, but it is a concern we hear in our discussions [with organizational leaders],” Chandrasekaran says. Shadow AI could introduce legal issues, too. ... “There has to be more awareness across the organization about the risks of AI, and CIOs need to be more proactive about explaining the risks and spreading awareness about them across the organization,” says Sreekanth Menon, global leader for AI/ML services at Genpact, a global professional services and solutions firm. 



Quote for the day:

"In matters of principle, stand like a rock; in matters of taste, swim with the current." -- Thomas Jefferson

Daily Tech Digest - July 08, 2024

How insurtech startups are addressing the challenges of slow processes in the insurance sector

Even though compliance and regulation are critical for the security of both insurers and customers, the regulatory process can be quite long. Compliance requirements demand meticulous attention to detail and can significantly prolong the approval process for new products and services. Another factor is risk aversion, which fosters a culture of caution within the industry, where insurers are hesitant to embrace change and experiment with new approaches to product development and underwriting. ... One of the solutions for these industry challenges lies in the collaboration of the insurance sector and the latest technologies. Insurtech solutions offer myriad innovative tools and technologies that promise to streamline product development and automate underwriting processes. One such solution gaining traction is artificial intelligence (AI) and machine learning algorithms, which can analyse vast amounts of data in real time to assess risk and expedite underwriting decisions.


Transforming Business Practices Through Augmented Intelligence

While AI raises apprehensions about potential job displacement, viewing it solely as a threat overlooks its capacity to enhance human capabilities, as evidenced by historical technological advancements. Training and education play a key role in this process, as AI has become an integral part of our reality and must be harnessed to its full potential. It is essential to align the use of artificial intelligence with the overall strategy of the organization for smooth integration of applications with data, processes, and collaboration between stakeholders. In a landscape where the internet simplifies transactions, software provides tools, and AI leverages data to make informed decisions, training and education become crucial. ... At its core, technology has always revolved around processing data. When viewed through the lens of enterprise architecture, an AI-powered machine learning tool can adeptly craft roadmaps tailored for businesses. Through advanced AI analytics, automation, and recommendation systems, enterprise architecture facilitates more informed and expedited decision-making processes.


Request for proposal vs. request for partner: what works best for you?

An RFProposal is an efficient choice when the nature of the work is standardized, while an RFPartner is the better choice when the buying organization is seeking a strategic partner for the overall best fit to meet its needs. ... When organizations shift to wanting to find a partner with the best possible solution, it’s important to understand that the nature of the selection criteria changes. With an RFPartner, buyers evaluate suppliers not only based on technical capabilities but also on the best value of the solution. ... “On the surface, an RFPartner sounds like a heavy lift, but we find that the overall time and effort is about the same,” he says. “In an RFProposal, the buyer is spending more time upfront defining the specs and in contentious negotiations. The RFPartner process flips this on its head and creates a more integrated bid solution that generates better solutions, spending more time together with the supplier co-creating, especially if your aim is making the shift to a highly collaborative vested business model to achieve strategic business outcomes.”


If you’re a CISO without D&O insurance, you may need to fight for it

D&O insurance covers the personal liabilities of corporate directors and officers in the event of incidents that lead to financial losses, reputational damage, or legal consequences. Without adequate D&O coverage, CISOs are left vulnerable, highlighting the need for this in an organization’s risk-management strategy. ... Lisa Hall, CISO at privately held Safebase, agrees that CISOs at all companies should be covered under their organizations’ D&O insurance policies, particularly in light of these new regulations. “I do think adding CISOs to D&O insurance will be more and more of a thing, and there is, for sure, more chatter in my CISO groups about how companies are handling this,” she says. “A lot of CISOs are also taking out errors and omissions insurance personally. I have that just for the consulting and advisory work I do.” ... “A lot of CISOs are thinking about this, especially after SolarWinds,” she says. “And if we feel that we’re not 100% protected for any decision we make, and we can be personally liable for a breach or possible incident even if we do the right thing, it’s really pushing CISOs to say, ‘Hey, company, I’ll join if you cover me or give me a different title.’”


How DORA is fortifying Europe’s financial future with a new take on operational resilience

For DORA, digital operational resilience very simply means “the ability of a financial entity to build, assure, and review its operational integrity and reliability by ensuring, either directly or indirectly through the use of services provided by ICT third-party service providers, the full range of ICT-related capabilities needed to address the security of the network and information systems which a financial entity uses, and which support the continued provision of financial services and their quality, including throughout disruptions”. Developing on this statement in a conversation with FinTech Futures, Simon Treacy, a senior associate at global law firm Linklaters, describes DORA as “a very prescriptive framework for financial entities, primarily to build and improve the way that they manage ICT risk”. “It applies very broadly across the EU regulated financial sector,” he continues, “and really part of its aim is to harmonise standards so that the smallest payments firm is subject to the same rules for operational resilience as the biggest banks and insurers.”


Data Sprawl: Continuing Problem for the Enterprise or an Untapped Opportunity?

Data fabric technologies excel in integrating and managing data across various environments. However, they often focus on conventional data sources like databases, data lakes, or data warehouses. The result is a gap in integrating and extracting value from data residing in numerous SaaS applications, as they may not seamlessly fit into these traditional data repositories. The combined solution of data fabric and iPaaS can address complex business challenges, such as integrating data from SaaS applications with traditional data sources. This capability is particularly valuable in today’s business landscape, where data is increasingly scattered across various cloud and on-premises environments. The merging of data fabric and iPaaS technologies offers a groundbreaking solution to this challenge, opening the door to new opportunities in data management and analysis. The integration of data fabric with iPaaS addresses the complexity and expertise-dependency in iPaaS. Data fabric can enable users to discover, understand, and verify data before integration flows are built. 


AI’s moment of disillusionment

AI, whether generative AI, machine learning, deep learning, or you name it, was never going to be able to sustain the immense expectations we’ve foisted upon it. I suspect part of the reason we’ve let it run so far for so long is that it felt beyond our ability to understand. It was this magical thing, black-box algorithms that ingest prompts and create crazy-realistic images or text that sounds thoughtful and intelligent. And why not? The major large language models (LLMs) have all been trained on gazillions of examples of other people being thoughtful and intelligent, and tools like ChatGPT mimic back what they’ve “learned.” ... We go through this process of inflated expectations and disillusionment with pretty much every shiny new technology. Even something as settled as cloud keeps getting kicked around. My InfoWorld colleague, David Linthicum, recently ripped into cloud computing, arguing that “the anticipated productivity gains and cost savings have not materialized, for the most part.” I think he’s overstating his case, but it’s hard to fault him, given how much we (myself included) sold cloud as the solution for pretty much every IT problem.


How nation-state cyber attacks disrupt public services and undermine citizen trust

While nation-states do have advanced capabilities and visibility that are hard or impossible for cyber criminals to replicate, the general strategy for attackers is to target vulnerable perimeter devices such as VPNs or firewalls as an entry point to the network. Next, they focus on obtaining privileged credentials and leveraging legitimate software to masquerade as normal activity as they scout the environments for valuable data or large repositories to disrupt. It’s important to note that the commonly exploited vulnerabilities in government IT systems are not distinctly different from the vulnerabilities exploited more broadly. Government IT systems are often extremely diverse and thus subject to a variety of exploits. ... Currently, there are numerous policies and regulations, both domestically and internationally, which are inconsistent and vary in their requirements. These administrative requirements take significant resources which could otherwise be used to strengthen a company’s cybersecurity program.


How Quantum Computing Will Revolutionize Cloud Analytics

As we peer into the future of quantum computing in cloud analytics, the emphasis on collaboration and continuous innovation becomes undeniable. Integrating quantum technologies with cloud systems is not just a technological upgrade but a paradigm shift requiring robust partnerships across academia, industry, and government sectors. For instance, IBM’s quantum network includes over 140 members, including start-ups, research labs, and educational institutions, working together to advance quantum computing. This collaborative model is essential because the challenges in quantum computing are not just about hardware or software alone but about creating an ecosystem that supports an entirely new kind of computing. That ecosystem comprises components such as quantum hardware development, quantum algorithms, software tools, and educational resources. IBM has also made significant achievements, such as developing quantum hardware like the IBM Quantum System One, advancing quantum algorithms for practical applications in chemistry and materials science, and creating the Qiskit software development kit to make quantum programming more accessible.


How continuous learning is reshaping the workforce

Gone are the days when lengthy training programs were sought after and people took breaks from their careers to pick up an upskilling program. Navpreet Singh highlights that upskilling will become an ongoing process integrated into the workday. “The focus will shift from acquiring specific job skills to fostering adaptability and lifelong learning. Critical thinking, problem-solving, and creativity will be paramount as automation takes over routine tasks. Traditional ways of learning may not always reflect the skills needed. Alternative credentials, like badges and micro-credentials, will showcase the specific skills employees possess, making them more competitive. By embracing this future of upskilling, we can ensure our workforce is adaptable, future-proof, and ready to drive innovation in the ever-evolving automotive industry,” explains Singh. Within the next decade or so, we will see greater demand for agile ed-tech tools that help employees learn on the go and prepare them for new roles, says Daniele Merlerati, Chief Regional Officer APAC, Baltics, Benelux at Gi Group Holding.



Quote for the day:

"Perseverance is failing nineteen times and succeeding the twentieth." -- Julie Andrews

Daily Tech Digest - July 07, 2024

How Good Is ChatGPT at Coding, Really?

A study published in the June issue of IEEE Transactions on Software Engineering evaluated the code produced by OpenAI’s ChatGPT in terms of functionality, complexity and security. The results show that ChatGPT has an extremely broad range of success when it comes to producing functional code—with a success rate ranging anywhere from as poor as 0.66 percent to as good as 89 percent—depending on the difficulty of the task, the programming language, and a number of other factors. While in some cases the AI generator could produce better code than humans, the analysis also reveals some security concerns with AI-generated code. ... Overall, ChatGPT was fairly good at solving problems in the different coding languages—but especially when attempting to solve coding problems that existed on LeetCode before 2021. For instance, it was able to produce functional code for easy, medium, and hard problems with success rates of about 89, 71, and 40 percent, respectively. “However, when it comes to the algorithm problems after 2021, ChatGPT’s ability to generate functionally correct code is affected. It sometimes fails to understand the meaning of questions, even for easy level problems,” Tang notes.


What can devs do about code review anxiety?

A lot of folks reported that either they would completely avoid picking up code reviews, for example. So maybe someone's like, “Hey, I need a review,” and folks are like, “I'm just going to pretend I didn't see that request. Maybe somebody else will pick it up.” So just kind of completely avoiding it because this anxiety refers to not just getting your work reviewed, but also reviewing other people's work. And then folks might also procrastinate, they might just kind of put things off, or someone was like, “I always wait until Friday so I don't have to deal with it all weekend and I just push all of that until the very last minute.” So definitely you see a lot of avoidance. ... there is this misconception that only junior developers or folks just starting out experience code review anxiety, with the assumption that it's only because you're experiencing the anxiety when your work is being reviewed. But if you think about it, anytime you are a reviewer, you're essentially asked to contribute your expertise and so there is an element of, “If I mess up this review, I was the gatekeeper of this code. And if I mess it up, that might be my fault.” So there's a lot of pressure there.
 

Securing the Growing IoT Threat Landscape

What’s clear is that there should be greater collective responsibility between stakeholders to improve IoT security outlooks. A multi-stakeholder response is necessary, leading to manufacturers prioritising security from the design phase, to governments implementing legislation to mandate responsibility. Currently, some of the leading IoT issues relate to deployment problems. Alex suggests that IT teams also need to ensure default device passwords are updated and complex enough to not be easily broken. Likewise, he highlights the need for monitoring to detect malicious activity. “Software and hardware hygiene is essential, especially as IoT devices are often built on open source software, without any convenient, at scale, security hardening and update mechanisms,” he highlights. “Identifying new or known vulnerabilities and having an optimised testing and deployment loop is vital to plug gaps and prevent entry from bad actors.” A secure-by-design approach should ensure more robust protections are in place, alongside patching and regular maintenance. Alongside this, security features should be integrated from the start of the development process.


Beyond GPUs: Innatera and the quiet uprising in AI hardware

“Our neuromorphic solutions can perform computations with 500 times less energy compared to conventional approaches,” Kumar stated. “And we’re seeing pattern recognition speeds about 100 times faster than competitors.” Kumar illustrated this point with a compelling real-world application. ... Kumar envisions a future where neuromorphic chips increasingly handle AI workloads at the edge, while larger foundational models remain in the cloud. “There’s a natural complementarity,” he said. “Neuromorphics excel at fast, efficient processing of real-world sensor data, while large language models are better suited for reasoning and knowledge-intensive tasks.” “It’s not just about raw computing power,” Kumar observed. “The brain achieves remarkable feats of intelligence with a fraction of the energy our current AI systems require. That’s the promise of neuromorphic computing – AI that’s not only more capable but dramatically more efficient.” ... As AI continues to diffuse into every facet of our lives, the need for more efficient hardware solutions will only grow. Neuromorphic computing represents one of the most exciting frontiers in chip design today, with the potential to enable a new generation of intelligent devices that are both more capable and more sustainable.


Artificial intelligence in cybersecurity and privacy: A blessing or a curse?

AI helps cybersecurity and privacy professionals in many ways, enhancing their ability to protect systems, data, and users from various threats. For instance, it can analyse large volumes of data, spot anomalies, and identify suspicious patterns for threat detection, which helps to find unknown or sophisticated attacks. AI can also defend against cyber-attacks by analysing and classifying network data, detecting malware, and predicting vulnerabilities. ... The harmful effects of AI may be fewer than the positive ones, but they can have a serious impact on organisations that suffer from them. Clearly, as AI technology advances, so do the strategies for both protecting and compromising digital systems. Security professionals should not ignore the risks of AI, but rather prepare for them by using AI to enhance their capabilities and reduce their vulnerabilities. ... “As attackers are increasingly leveraging AI, integrating AI defences is crucial to stay ahead in the cybersecurity game. Without it, we risk falling behind.” Consequently, cybersecurity and privacy professionals, and their organisations, should prepare for AI-driven cyber threats by adopting a multi-faceted approach to enhance their defences while minimising risks and ensuring ethical use of technology.
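The anomaly-spotting capability described here can be illustrated with a toy z-score detector — a deliberately simple statistical stand-in for the far richer ML techniques the article alludes to:

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Return indices of points more than `threshold` standard deviations
    from the mean -- a toy stand-in for statistical anomaly detection."""
    mean = statistics.fmean(values)
    sd = statistics.pstdev(values)
    if sd == 0:
        return []  # no variation, nothing stands out
    return [i for i, v in enumerate(values) if abs(v - mean) / sd > threshold]
```

Fed a stream of, say, per-minute login counts, such a detector surfaces the sudden spike worth investigating, while real systems layer many such signals together.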


Intel is betting big on its upcoming Lunar Lake XPUs to change how we think of AI in our PCs

Designed with power efficiency in mind, the Lunar Lake architecture is ideal for portable devices such as laptops and notebooks. These processors balance performance and efficiency by integrating Performance Cores (P-cores) and Efficiency Cores (E-cores). This combination allows the processors to handle both demanding tasks and less intensive operations without draining the battery. The Lunar Lake processors will feature a configuration of up to eight cores, split equally between P-cores and E-cores. This design aims to improve battery life by up to 60 per cent, positioning Lunar Lake as a strong competitor to ARM-based CPUs in the laptop market. Intel anticipates that these will be the most efficient x86 processors it has ever developed. ... A major highlight of the Lunar Lake processors is the inclusion of the new Xe2 GPUs as integrated graphics. These GPUs are expected to deliver up to 80 per cent better gaming performance compared to previous generations. With up to eight second-generation Xe-cores, the Xe2 GPUs are designed to support high-resolution gaming and multimedia tasks, including handling up to three 4K displays at 60 frames per second with HDR.


Cyber Threats And The Growing Complexity Of Cybersecurity

Irvine envisions a future where the cybersecurity industry undergoes significant disruption, with a greater emphasis on data-driven risk management. “The cybersecurity industry is going to be disrupted severely. We start to think about cybersecurity more as a risk and we start to put more data and more dollars and cents around some of these analyses,” she predicted. As the industry matures, Dr. Irvine anticipates a shift towards more transparent and effective cybersecurity solutions, reducing the prevalence of smoke and mirrors in the marketplace. She also claims that “AI and LLMs will take over jobs. There will be automation, and we're going to need to upskill individuals to solve some of these hard problems. It's just a challenge for all of us to figure out how.” Kosmowski also remarked that the industry must remain on top of what will continue to be a definitive risk to organizations: “Over 86% of companies are hybrid and expect to remain hybrid for the foreseeable future, plus we know IT proliferation is continuing to happen at a pace that we have never seen before.”


The blueprint for data center success: Documentation and training

In any data center, knowledge is a priceless asset. Documenting configurations, network topologies, hardware specifications, decommissioning regulations, and other items mentioned above ensures that institutional knowledge is not lost when individuals leave the organization. So, no need to panic once the facility veteran retires, as you’ll already have all the information they have! This information becomes crucial for staff, maintenance personnel, and external consultants to understand every facet of the systems quickly and accurately. It provides a more structured learning path, facilitates a deeper understanding of the data center's infrastructure and operations, and allows facilities to keep up with critical technological advances. By creating a well-documented environment, facilities can rest assured knowing that authorized personnel are adequately trained, and vital knowledge is not lost in the shuffle, contributing to overall operational efficiency and effectiveness, and further mitigating future risks or compliance violations.


Why Knowledge Is Power in the Clash of Big Tech’s AI Titans

The advanced AI models currently under development across big tech -- models designed to drive the next class of intelligent applications -- must learn from more extensive datasets than the internet can provide. In response, some AI developers have turned to experimenting with AI-generated synthetic data, a risky proposition that could compromise an entire model if even a small portion of its training data is inaccurate. Others have pivoted to content licensing deals for access to useful, albeit limited, proprietary training data. ... The real differentiating edge lies in who can develop a systemic means of achieving GenAI data validation, integrity, and reliability with a certificated or “trusted” designation, in addition to acquiring expert knowledge from trusted external data and content sources. These twin pillars of AI trust, coupled with the raw computing and computational power of new and emerging data centers, will likely be the markers of which big tech brands gain the immediate upper hand.


Should Sustainability be a Network Issue?

The beauty of replacing existing network hardware components with energy-efficient, eco-friendly, small form factor infrastructure elements wherever possible is that no adjustments have to be made to network configurations and topology. In most cases, you're simply swapping out routers, switches, etc. The need for these equipment upgrades naturally occurs with the move to Wi-Fi 6, which requires new network switches, routers, etc., in order to run at full capacity. Hardware replacements can be performed on a phased plan that commits a portion of the annual budget each year for network hardware upgrades ... There is a need in some cases to have discrete computer networks that are dedicated to specific business functions, but there are other cases where networks can be consolidated so that resources such as storage and processing can be shared. ... Network managers aren’t professional sustainability experts—but local utility companies are. In some areas of the U.S., utility companies offer free onsite energy audits that can help identify areas of potential energy and waste reduction.



Quote for the day:

"It takes courage and maturity to know the difference between a hoping and a wishing." -- Rashida Jourdain

Daily Tech Digest - July 06, 2024

A CISO's Guide to Avoiding Jail After a Breach

The key to avoiding trouble as a security leader, Nall says, is awareness of three things: how government investigations work, how the government interacts with companies during the process, and the incentives companies have to resolve their cases in one way or another. When push comes to shove, for example, companies will be pressured to name and shame individuals. In his proceedings, Sullivan's legal team painted a picture of a company (Uber) trying to rebrand itself, and holding him up as a lamb to the slaughter. "It's very unfortunate because the consequences are faced by one individual, or a few individuals, although the ability to make sure that [an incident] doesn't happen is a community-based effort within organizations," says ArmorCode's Karthik Swarnam, formerly chief information security officer (CISO) of Kroger, DIRECTV, and TransUnion. To avoid being singled out (and because it's good security practice), CISOs should focus on building clear and robust lines of communication that bring other board members into the cybersecurity decision-making process.


How Pearson’s CIO manages technical debt

Keen to address this, Wells and the Pearson technology working group, which includes tech leadership from across the brand’s different organizations, came up with 12 key attributes, including security and maintainability, to rate their technology assets in a consistent way. These tech debt audits provided a clearer picture of where their biggest risks were, which, in turn, allowed them to prioritize what needs to be addressed first. “We developed an algorithm to measure our different applications based on these 12 categories so we can eliminate technical debt via a more strategic and standardized approach,” she says, noting that the goal was to do away with any guesswork and make decisions based on opportunities and potential revenue risks. ... As part of the process, she and her team needed to get the various leaders from across the business on board by making sure they understood that technical debt isn’t just a technology problem. “We really had to communicate that this is a priority, but we couldn’t do so by only talking to them about technology,” she says.


Strategic alignment in the age of AI: The 7 foundations of competitive success

The strategy must align with the capabilities of the organization and the competitive reality of the environment. Such an alignment has never been more important, as artificial intelligence (AI) and other changes disrupt industries and sectors. Before rushing to adopt the latest AI tool, whether it is deep learning or large language models, organizations must assess whether the new tech is strategically aligned. ... Aligning people with the desired strategic position and vision for the organization is critical. In high-performing organizations, employees and members understand their strategic mission and vision and are dedicated to achieving it. They become acolytes of their leaders and passionate advocates for their organizations. They see how their role contributes to the strategy of the organization and execute with a sense of purpose and teamwork. How many of your employees can articulate how your AI efforts advance your strategy? ... In truth, strategic alignment may be rare. If you are fortunate, you can recall a situation where alignment occurred, allowing you and your organization to achieve incredible heights. 


The AI Revolution Will Not Be Monopolized

Open source in AI and machine learning is not just about software; it's about the synergy of code and data. The growing ecosystem of open-source models encompasses everything from code to data and weights, making powerful tools widely accessible. ... The term "large language models" (LLMs) is often used broadly and imprecisely, muddying discussions about their capabilities and applications. The distinction between encoder models and large generative models is therefore very important. Encoder models involve task-specific networks that predict structured data, while large generative models rely on prompts to produce free-form text, necessitating additional logic to extract actionable insights. ... Companies like OpenAI might dominate the market for user-facing products but not necessarily the AI and software components behind them. While user data is advantageous for improving human-facing products, it is less critical for enhancing the foundational machine-facing tasks. Gaining general knowledge doesn't require specific data, which is at the core of the innovation behind large generative models.


CISA Warns Chemical Facilities of Data Theft After Hacker Breached CSAT Security Tool via Ivanti

CISA says that all information in the CSAT tool was encrypted using the AES-256 algorithm, and the keys were also inaccessible “from the type of access the threat actor had to the system.” The agency also found “no evidence of credentials being stolen.” However, impacted organizations should, “out of an abundance of caution,” assume “that this information could have been inappropriately accessed,” the agency said. The agency also stated that even without data theft, the intrusion “met the threshold of a major incident under the Federal Information Security Modernization Act (FISMA),” given the number of individuals and chemical facilities impacted. Subsequently, CISA directed impacted chemical facilities to maintain cyber and physical security measures to prevent potential attacks as a result of the cyber incident. Similarly, CISA encourages individuals who had CSAT accounts to reset their passwords for all online accounts that share the same password to prevent future password spraying attacks.


Autonomous Vehicles Can Make All Cars More Efficient

To illustrate how the technology works, the team installed a traffic signal along the demonstration pathway. Gankov says an actual traffic-light timer from a traffic-signal cabinet was connected to a TV screen, providing a visual for attendees. A dedicated short-range communications (DSRC) radio was also attached, broadcasting the signal’s phase and timing information to the vehicle. This setup enabled the vehicle to anticipate the traffic light’s actions far more accurately than a human driver could. ... These autonomous driving strategies can lead to significant energy savings, benefiting not just the autonomous vehicles themselves, but also the entire traffic ecosystem. “In a regular traffic situation, autonomous vehicles operating in ecomode influence the driving behavior of all the cars behind them,” says Gankov. “The result is that even vehicles with Level 0 autonomy use fuel more sparingly.” ... Employing techniques like efficient highway merging was a key strategy in their approach to making the most of each tank of fuel or battery charge.
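The core idea of anticipating the signal can be sketched in a few lines. This is a hypothetical simplification, not the demonstration's actual logic: given the phase and time-to-change a DSRC broadcast provides, plus the vehicle's distance and speed, the vehicle can coast early rather than brake hard at the line.

```python
# Hypothetical eco-driving decision based on signal phase and timing (SPaT).
# Function name, actions, and thresholds are illustrative assumptions.

def eco_advice(phase: str, seconds_to_change: float,
               distance_m: float, speed_mps: float) -> str:
    """Return a simple eco-driving action given the broadcast signal state."""
    if speed_mps <= 0:
        return "hold"
    eta = distance_m / speed_mps  # seconds until the vehicle reaches the line
    if phase == "green":
        # Maintain speed only if the light will still be green on arrival.
        return "maintain" if eta < seconds_to_change else "coast"
    # Red: if it turns green before arrival, coast through instead of stopping.
    return "coast" if seconds_to_change < eta else "slow"

# 200 m from the line at 15 m/s (ETA ~13.3 s); red turns green in 10 s.
print(eco_advice("red", 10, 200, 15))    # coast
# Green for another 20 s, ETA ~13.3 s: the vehicle makes the light.
print(eco_advice("green", 20, 200, 15))  # maintain
```

A human driver guessing at the light's timing tends to brake late and accelerate from a stop; knowing the timing in advance converts those stops into gentle coasts, which is where the fuel savings come from.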


SyncBack is the best free backup software on Windows and everyone should use it

On top of changing the type of sync or backup you want to perform, SyncBack also has a ton of configuration options for almost everything about how it works. By default, each profile you create is a manual backup, so whenever you want to back up your files, you just run the profile. For on-demand use, you can create a hotkey that runs a specific profile at any time. But if you're a "set it and forget it" type of person, you can also automate the backups: SyncBack uses the Task Scheduler in Windows to let you create a scheduled backup at whatever frequency you prefer. ... Since each profile is only meant to sync one folder, if you want to sync files in completely different locations, you'll need separate profiles, but you can create group profiles so that all the profiles within are run at the same time, rather than sequentially. You can also enable things like compression for copied files, decrypt files when they're copied, choose whether a mismatched file should be copied or deleted when syncing two folders, and even enable a rudimentary form of ransomware detection.


From Scalability to Speed: Generative AI has Put Testing on Steroids

In the past, testing quality has been a big concern, necessitating early integration of QA into the development life cycle. Now, with GenAI, the focus has advanced beyond simple assurance to actively engineering quality. The key distinction lies in the approach — classic AI involves human intervention and manual processes, while GenAI automates and innovates testing methodologies. Consider dealing with requirement quality early in the software development life cycle. Using classic AI, a business analyst might define requirements to cover various interpretations, which may lead to certain ambiguity and potential failures. ... Using GenAI is not about replacing the human workforce, but enhancing our capabilities. The shortage of senior automation testers results in lost business revenue. However, with GenAI, junior engineers can now harness the power of GenAI-enabled automation, performing tasks with the built-in knowledge of a seasoned architect. GenAI’s prowess is not arbitrary; it has learned from billions of data points. By combining traditional knowledge with AI capabilities, new solutions bring scalability and speed to testing.
 

How organisations can thrive with resilience and empathy

While many organisations embrace the value of empathy, they often fall short in delivering it with genuine sincerity. Superficial expressions of empathy without meaningful actions, including consistent recognition, lead to employee dissatisfaction and high turnover rates. However, demonstrating sincere empathy and providing meaningful recognition can be challenging. Leaders sometimes face criticism for being "too considerate," particularly when their decisions appear to disproportionately benefit employees during setbacks. This dynamic can result in empathy fatigue, where the constant demand for empathetic responses and recognition strains HR professionals and leaders. ... Change can only occur when an organisation adopts the principles of nimble resilience and empathy, using them to shape policies, programs, and workplace culture. This approach encourages employees to build relationships, find new solutions, work collaboratively across disciplines, and embrace a forward-thinking perspective. As a result, trust in leaders and team members increases, along with connections to the organisation and its purpose.


Mind the Gap: The Product in Data Product Is Reliability

The data product concept has been fleshed out in recent years with definitions, reference architectures, and platforms. They consist of … actually, let’s not worry about what data products consist of. At least, not right now. That’s not the important part. Instead, let’s start where we should always start: the consumer. ... Your assurance that its contents are always correct is the most significant distinguishing characteristic of a data product. You provide the ongoing validation, certification, and research so that your users don’t have to. You ensure that the data product is kept current with new arriving data. You continuously monitor its data quality. In addition to content, you must also be concerned with semantics. Changes in the business as implemented in the source systems and propagated through the data may necessitate changes to the data product. ... Technology can facilitate, but technology alone is not remotely sufficient. I’ve seen the data product label slapped on data marts, summary tables, and even raw data with none of the curation or monitoring. 
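The "ongoing validation" described above is essentially a set of automated checks run on every refresh, so consumers never have to verify the data themselves. Below is a minimal sketch of that idea; the field names, checks, and thresholds are hypothetical, not from the article.

```python
# Illustrative data-product quality gate: completeness, validity, freshness.
# All names and thresholds are assumptions for the sake of the example.

from datetime import datetime, timedelta, timezone

def validate_data_product(rows: list, max_age: timedelta) -> list:
    """Return a list of quality violations; an empty list means the refresh passes."""
    if not rows:
        return ["dataset is empty"]
    problems = []
    # Completeness: no missing customer identifiers.
    if any(r.get("customer_id") is None for r in rows):
        problems.append("null customer_id found")
    # Validity: revenue must be non-negative.
    if any(r["revenue"] < 0 for r in rows):
        problems.append("negative revenue found")
    # Freshness: the newest record must fall within the agreed window.
    newest = max(r["loaded_at"] for r in rows)
    if datetime.now(timezone.utc) - newest > max_age:
        problems.append("data is stale")
    return problems

now = datetime.now(timezone.utc)
rows = [
    {"customer_id": 1, "revenue": 120.0, "loaded_at": now},
    {"customer_id": 2, "revenue": -5.0, "loaded_at": now},
]
issues = validate_data_product(rows, max_age=timedelta(hours=24))
print(issues)  # ['negative revenue found']
```

The point of the article is that checks like these run continuously and block or flag a bad refresh before consumers see it; a summary table without that machinery is just a table, not a data product.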



Quote for the day:

"The only limit to our realization of tomorrow will be our doubts of today." -- Franklin D. Roosevelt