Daily Tech Digest - August 25, 2024

Never summon a power you can’t control

Traditionally, the term “AI” has been used as an acronym for artificial intelligence. But it is perhaps better to think of it as an acronym for alien intelligence. As AI evolves, it becomes less artificial (in the sense of depending on human designs) and more alien. Many people try to measure and even define AI using the metric of “human-level intelligence”, and there is a lively debate about when we can expect AI to reach it. This metric is deeply misleading. It is like defining and evaluating planes through the metric of “bird-level flight”. AI isn’t progressing towards human-level intelligence. It is evolving an alien type of intelligence. Even at the present moment, in the embryonic stage of the AI revolution, computers already make decisions about us – whether to give us a mortgage, to hire us for a job, to send us to prison. Meanwhile, generative AIs like GPT-4 already create new poems, stories and images. This trend will only increase and accelerate, making it more difficult to understand our own lives. Can we trust computer algorithms to make wise decisions and create a better world? That’s a much bigger gamble than trusting an enchanted broom to fetch water.


Artificial Intelligence: To regulate or not is no longer the question

First, existing laws have been amended to support the use of AI, thereby enabling the economy to benefit from broader AI adoption. The Copyright Act 2021, for example, has been amended to clarify that copyrighted material may be used for machine learning provided that the model developer had lawful access to the data. Amendments to the Personal Data Protection Act (PDPA) 2012 enabled the re-use of personal data to support research and business improvement, after model development using anonymised data proved to be inadequate. Detecting fraud, preserving the integrity of systems and ensuring physical security of premises are also recognised as legitimate interests for using personal data in AI systems. Second, regulatory guidance has been issued on how existing regulations that protect consumers will also apply to AI systems. The Personal Data Protection Commission has issued a set of advisory guidelines on how the PDPA 2012 will apply at different stages of model development and deployment whenever personal data is used. It also clarifies the level of transparency expected from organisations deploying AI systems and how they may disclose relevant information to boost consumer trust and confidence. 


When You're Building The Future The Past Is No Longer A Guide

Artificial Intelligence (AI) definitely has its place. But when it comes to these specific industrial and manufacturing challenges, it tends to be fundamental engineering and physics that generate the answers – number crunching and data processing in the extreme. That, in turn, means that the engineers working to deliver more detailed test results and more realistic prototypes, and to run ever more fine-grained simulations, turn to some of the most powerful high-performance computing systems to power their workloads. What might have counted as a system capable of High Performance Computing (HPC) a decade, or even a few years ago, can quickly run out of steam. Computational fluid dynamics (CFD) applications often use thousands of CPU cores, points out Gardinalli. But it’s not purely a question of throwing raw power – and dollars – at the issue. The real conundrum is how to map to a wide range of different domains which all require different underlying infrastructure. Finite element analysis (FEA), for example, focuses on working out how materials and structures will act under stress. It’s therefore critical to public infrastructure as well as to vehicle design and crash simulation. 


Top companies ground Microsoft Copilot over data governance concerns

Asked how many had grounded a Copilot implementation, Berkowitz said it was about half of them. Companies, he said, were turning off Copilot software or severely restricting its use. "Now, it's not an unsolvable problem," he added. "But you've got to have clean data and you've got to have clean security in order to get these systems to really work the way you anticipate. It's more than just flipping the switch." While AI software also has specific security concerns, Berkowitz said the issues he was hearing about had more to do with internal employee access to information that shouldn't be available to them.  Asked whether the situation is similar to the IT security challenge 15 years ago when Google introduced its Search Appliance to index corporate documents and make them available to employees, Berkowitz said: "It's exactly that." Companies like Fast and Attivio, where Berkowitz once worked, were among those that solved the enterprise search security problem by tying file authorization rights to search results. So how can companies make Copilots and related AI software work? "The biggest thing is observability and not from a data quality viewpoint, but from a realization viewpoint," said Berkowitz. 
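The fix Berkowitz credits to companies like Fast and Attivio, tying file authorization rights to search results, can be illustrated with a deliberately simple permission filter. This is a hypothetical sketch (the document IDs, users, and ACL structure are invented); real enterprise search products enforce this at index or query time:

```python
def authorized_results(hits, user, acl):
    """Return only the search hits the given user is allowed to read.

    `hits` is a ranked list of document IDs; `acl` maps each document
    to the set of principals permitted to view it. Documents with no
    ACL entry are treated as private and filtered out.
    """
    return [doc for doc in hits if user in acl.get(doc, set())]

# Toy example: "bob" can see the handbook but not the salary sheet.
acl = {"handbook.pdf": {"alice", "bob"}, "salaries.xlsx": {"alice"}}
print(authorized_results(["handbook.pdf", "salaries.xlsx"], "bob", acl))
```

The same principle applies to a Copilot-style assistant: retrieved passages must pass the user's authorization check before they ever reach the model's context window.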


Five incorrect assumptions about ISO 27001

We wish there were such a thing as an impenetrable cyber barrier. Unfortunately, there isn’t—not even at the highest levels. For any IT system to be effective, information must be sent and received from external sources. These days, vast amounts of data get copied and transferred every second, moving around the world at lightspeed. As a result, there are always multiple potential access points for criminals to get in. ISO 27001 – and any good cybersecurity strategy – can’t offer 100% protection against cyber threats. However, they can significantly mitigate the risks associated with these attacks. A correctly applied ISMS will make you more likely to keep any malware or bad actors out. ... ISO 27001 isn’t a one-time thing. Unfortunately, nothing is in information security – or business in general. The initial implementation is the most time-consuming aspect and may require the most significant financial investment. But once it’s in place, there’s no time to sit back and relax. Your staff will immediately switch focus to using pre-agreed KPIs to analyse your ISMS’s effectiveness, suggesting and making strategic adjustments as relevant.


How we’re using ‘chaos engineering’ to make cloud computing less vulnerable to cyber attacks

Chaos engineering involves deliberately introducing faults into a system and then measuring the results. This technique helps to identify and address potential vulnerabilities and weaknesses in a system’s design, architecture, and operational practices. Methods can include shutting down a service, injecting latency (a time lag in the way a system responds to a command) and errors, simulating cyberattacks, terminating processes or tasks, or simulating a change in the environment in which the system is working and in the way it’s configured. In recent experiments, we introduced faults into live cloud-based systems to understand how they behave under stressful scenarios, such as attacks or faults. By gradually increasing the intensity of these “fault injections”, we determined the system’s maximum stress point. ... Chaos engineering is a great tool for enhancing the performance of software systems. However, to achieve what we describe as “antifragility” – systems that could get stronger rather than weaker under stress and chaos – we need to integrate chaos testing with other tools that transform systems to become stronger under attack.
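A minimal latency-injection experiment of the kind described can be sketched in a few lines of Python. This is an illustrative toy, not a production chaos tool, and the `fetch_balance` service is invented for the demo:

```python
import functools
import random
import time

def inject_latency(probability=0.2, max_delay=0.5):
    """Decorator that randomly delays calls to mimic a degraded dependency."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            if random.random() < probability:
                time.sleep(random.uniform(0, max_delay))  # injected fault
            return fn(*args, **kwargs)
        return inner
    return wrap

@inject_latency(probability=1.0, max_delay=0.05)  # always inject for the demo
def fetch_balance(account):
    return {"account": account, "balance": 100}

# The call still succeeds; a real experiment would measure how callers
# cope with the added delay (timeouts, retries, fallbacks) and ramp the
# probability and delay up until the system's stress point is found.
print(fetch_balance("acct-1"))
```

Gradually raising `probability` and `max_delay` mirrors the article's description of increasing fault-injection intensity until the maximum stress point is reached.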


Six pillars for AI success: how the C-suite can drive results

Many AI and GenAI solutions have common patterns and benefit from reusable assets that can accelerate time to value and reduce costs. Without a control tower, different groups across an enterprise are at risk of building very similar things from scratch for various use cases. The control tower effectively has authority over where an organization will make its investments and create value by identifying patterns across the various use cases that align with business needs and prioritizing the development of GenAI solutions, for example. ... The truly transformative impact would be to entirely reimagine what you do in the front office, not just streamline the back office. GenAI unlocks new products, services and business models that are easy to overlook if you approach the technology with a robotic process automation mindset. That can include creating new products and features enabled through GenAI, equipping them with connectivity under pay-as-you-go service subscription models, selling them directly to consumers instead of through intermediaries, and leveraging the consumer data for insights and perhaps selling it as a separate revenue stream. 


Cyber Hygiene: The Constant Defense Against Evolving B2B Threats

By partnering with companies that provide early warnings about threats and scams when they see them independently, such as domain spoofing attempts, businesses can stay ahead of potential threats. “That’s an important control, and I strongly recommend it for any company,” Kenneally said, stressing the benefits of collaborative working partnerships. “It’s about ensuring that the controls are in place and that we are partnering with our customers to mitigate risks,” he added. This is particularly relevant given the increasing sophistication of phishing attempts, some of which may be assisted by artificial intelligence. Another aspect of Boost’s strategy is fostering a culture of resilience and agility within the organization. This involves continuous training and education, not just for the IT team but across the entire company. “Training is critical,” Kenneally said. ... As the cybersecurity landscape continues to evolve, the need for companies to protect their digital perimeter becomes more pressing. But while the threats may change, the fundamental principles of good cybersecurity — vigilance, education and proactive planning — remain constant.


I’ve got the genAI blues

Why is this happening? I’m not an AI developer, but I pay close attention to the field and see at least two major reasons they’re beginning to fail. The first is that the quality of the content used to create the major LLMs has never been that good. Many include material from such “quality” websites as Twitter, Reddit, and 4Chan. As Google’s AI Overview showed earlier this year, the results can be dreadful. As MIT Technology Review noted, it came up with such poor quality answers as “users [should] add glue to pizza or eat at least one small rock a day, and that former US president Andrew Johnson earned university degrees between 1947 and 2012, despite dying in 1875.” Unless you glue rocks into your pizza, those are silly, harmless examples, but if you need the right answer, it’s another matter entirely. Take, for example, the lawyer whose legal paperwork included information from AI-made-up cases. The judges were not amused. If you want to sex chat with genAI tools, which appears to be one of the most popular uses for ChatGPT, accuracy probably doesn’t matter that much to you. Getting the right answers, though, is what matters to me and should matter to anyone who wants to use AI for business.

AI technology brings significant benefits to the Financial Services sector, including enhanced efficiency through automation, improved accuracy in risk assessments, personalised customer experiences via AI-driven insights and faster, more secure fraud detection. It also enables predictive analytics for better decision-making in areas like investment and lending. ... AI is there to support the employee – to elevate human potential by delivering insights and knowledge and expediting results. However, challenges include the complexity of implementing AI systems, concerns around data privacy and security, regulatory compliance, and potential biases in AI models that can lead to unfair outcomes. Ensuring transparency and trust in AI decisions is also crucial for its broader acceptance in the sector. ... Trustworthy AI also ensures that compliance with regulations is maintained, risks are properly managed and ethical standards are upheld. In a sector where customer relationships are built on trust, any misstep could lead to reputational damage, financial loss, or regulatory penalties. 



Quote for the day:

“A dream doesn't become reality through magic; it takes sweat, determination, and hard work.” -- Colin Powell

Daily Tech Digest - August 24, 2024

India Nears Its Quantum Moment — Completion Of First Quantum Computer Expected Soon

Despite the progress, significant scientific challenges remain. Qubits are inherently unstable and susceptible to disturbances, leading to ‘decoherence’. Researchers worldwide are striving to overcome this through error-corrected qubits. “You have to show that by using such a system, you are actually solving some problem which is of relevance to industry or science or society and show that it is better, faster and cheaper,” Dr. Vijayaraghavan told India Today. “That of course will be the first holy grail of useful quantum computers. We are not there yet.” In Bengaluru, startup QpiAI is also venturing into quantum computing. Led by CEO and chairman Dr. Nagendra Nagaraja, the company is constructing a 25-qubit quantum computer, with plans to unveil it by the end of the year, according to the news service. With $6 million in funding, QpiAI intends to offer the platform to customers via cloud services and supply systems to top institutes and research groups across India. “Our vision is to integrate AI and quantum computing in enterprises,” Dr. Nagaraja told India Today.


How Seeing Is Believing With Your Leadership Abilities

One of the standout points in my discussion with Cherches was his approach to communicating complex ideas across different functions within an organization. He stresses the importance of translating concepts into the "language" of the audience. Whether through analogies, stories, or visual diagrams, the goal is to make the abstract tangible. Cherches illustrates this by introducing an example. "We need to communicate in the language of our stakeholders. For example, I teach in the HR master's program at NYU, and I always emphasize that if you need funding for an HR initiative, you have to translate that into the language of money for the CFO. It's about finding the right visual and verbal tools to resonate with different audiences." This is where visual leadership shines—bridging gaps between different departments and creating a common language everyone can understand. In today's business environment, where cross-functional and asynchronous collaboration is critical, leaders who can translate their vision into visual terms are more likely to gain buy-in and drive initiatives forward.


5 things I wish I knew as a CFO before starting a digital transformation

One of our biggest missteps was not thoroughly defining what we intended to achieve from different perspectives — IT, employees, customers and the executive team. We knew having to use something new would have pain points, but we didn’t understand the impact of going from a customized environment to a more standard platform. The business didn’t understand the advantages either — their work might be slightly less efficient or different, but the processes would now be scalable, more stable and completely standardized across the different business units. ... In hindsight, we greatly underestimated the effort to cleanse and prepare our data for migration. Now that the project is well on its way, I always hear about the importance of data cleansing and preparation. But I never heard it from anyone upfront. We could have spent a year restructuring data hierarchies to align with the new system before even starting implementation. ... Not every part of the project will be a success or an upgrade. But there will be incredible success stories, efficiencies, new capabilities or insights. Often, they’re unexpected, like the impact that pricing changes had on our business, even though they weren’t in our original scope. 


Linus Torvalds talks AI, Rust adoption, and why the Linux kernel is 'the only thing that matters'

Torvalds said, "There is some stability with old kernels, and we do backport for patches and fixes to them, but some fixes get missed because people don't think they're important enough, and then it turns out they were important enough." Besides, if you stick with an old kernel for too long when you finally need to update to a newer one, it can be a massive pain to do so. So, "to all the Chinese embedded Linux vendors who are still using the Linux 4.9 kernel," Torvalds said, wagging his finger, "Stop." In addition, Hohndel said that when patching truly ancient kernels, the Linux kernel team can only say, "Sorry, we can't help you with that. It was so long ago that we don't even remember how to fix it." Switching to a more modern topic, the introduction of the Rust language into Linux, Torvalds is disappointed that its adoption isn't going faster. "I was expecting updates to be faster, but part of the problem is that old-time kernel developers are used to C and don't know Rust. They're not exactly excited about having to learn a new language that is, in some respects, very different. So there's been some pushback on Rust."


EU AI Act Tightens Grip on High-Risk AI Systems: Five Critical Questions for U.S. Companies

The EU AI Act applies to U.S. companies across the entire AI value chain that develop, use, import, or distribute AI Systems in the EU market. Further, a U.S. company is subject to the EU AI Act where it operates AI Systems that produce output used in the EU market. In other words, even if a U.S. company develops or uses a “High-Risk” AI System for job screening or online proctoring purposes, the EU AI Act still governs if outputs produced by such AI System are used in the EU for recruiting or admissions purposes. In another use case, if a U.S. auto OEM incorporates an AI system to support self-driving functionalities and distributes the vehicle under its own brand in the EU, such OEM is subject to the EU AI Act. ... In addition, for those AI systems classified as “High-Risk” under the “Specific Use Cases” in Annex III, they must also complete a conformity assessment to certify that such AI systems comply with the EU AI Act. Where AI Systems are themselves “Regulated Products or related safety components,” the EU AI Act seeks to harmonize and streamline the processes to reduce market entrance costs and timelines. 


ServiceOps: Balancing Speed and Risk in DevOps

The integration between ITSM and AIOps tools automates identification of risky changes by analyzing risk information from the service history and operational data in a single pane of glass. AI models correlate past changes and determine their impact on operational variables such as service availability and health. This information decreases time spent on change requests by helping teams quickly understand the risk factors and the scope of impact by using powerful service dependency maps from AIOps tools. This AI-driven assessment also provides great feedback to DevOps and SRE teams, enabling them to deploy faster and with greater confidence. ... A conversational interface for change risk assessment can make risk insights understandable and actionable for teams tasked with delivering high-quality software rapidly. Imagine giving teams tasked with approving software changes access to a chat-based interface for asking questions and getting answers tailored to the specific environments where their software will be deployed. They could get answers to questions like, “What are the risky changes?” and “Can I look at change collisions?” The pace of change driven by DevOps presents significant challenges to IT service and IT operations teams. Both need to accelerate change without risking downtime. 
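The correlation of past changes with their operational impact can be illustrated with a deliberately naive risk score. This is a toy sketch with invented field names; real AIOps models weigh far more operational variables (service health, dependency maps, change windows) than a single incident ratio:

```python
def change_risk(change, history):
    """Naive risk score: fraction of past changes to the same service
    that caused an incident. Returns a value in [0, 1]."""
    similar = [h for h in history if h["service"] == change["service"]]
    if not similar:
        return 0.5  # no history for this service: treat as medium risk
    return sum(h["caused_incident"] for h in similar) / len(similar)

# Hypothetical service history pulled from ITSM records.
history = [
    {"service": "payments", "caused_incident": True},
    {"service": "payments", "caused_incident": False},
    {"service": "search", "caused_incident": False},
]
print(change_risk({"service": "payments"}, history))  # 0.5
```

A conversational interface of the kind described would sit on top of scores like this, letting an approver ask “what are the risky changes?” and get back the changes ranked by such a model.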


AI Assistants: Picking the Right Copilot

Not all assistants are meant for tech professionals. Others with a focus on consumer benefits are emerging. ... A good AI assistant should offer a responsive chat feature to indicate its understanding of its environment. Jupyter, Tabnine, and Copilot all offer a native chat UI for the user. The chat experience influences how well a professional feels the AI assistant is working. How well it interprets prompts and how accurate the suggestions are all start with the conversational assistant experience, so technical professionals should note their experiences to see which assistant works best for their projects. Professionals should also consider the frequency of the work in which the AI assistant is being applied. The frequency can indicate the degree of value being created — more frequency gives an AI assistant an opportunity to learn user preferences and past account history, which plays into its recommendations. The result is better productivity with AI, learning quickly where to best explore and experiment with crafting applications. Considering solution frequency can also reveal the cost of the technology against the value received. 


Researchers propose a smaller, more noise-tolerant quantum factoring circuit for cryptography

The MIT researchers found a clever way to compute exponents using a series of Fibonacci numbers that requires simple multiplication, which is reversible, rather than squaring. Their method needs just two quantum memory units to compute any exponent. "It is kind of like a ping-pong game, where we start with a number and then bounce back and forth, multiplying between two quantum memory registers," Vaikuntanathan adds. They also tackled the challenge of error correction. The circuits proposed by Shor and Regev require every quantum operation to be correct for their algorithm to work, Vaikuntanathan says. But error-free quantum gates would be infeasible on a real machine. They overcame this problem using a technique to filter out corrupt results and only process the right ones. The end result is a circuit that is significantly more memory-efficient. Plus, their error correction technique would make the algorithm more practical to deploy. "The authors resolve the two most important bottlenecks in the earlier quantum factoring algorithm. Although still not immediately practical, their work brings quantum factoring algorithms closer to reality," adds Regev.
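The classical analogue of the "ping-pong" idea can be sketched in a few lines: because x^F(i) · x^F(i+1) = x^F(i+2), two registers and plain multiplication suffice to reach Fibonacci-indexed exponents, with no squaring step. This is a toy classical illustration of the arithmetic identity, not the quantum circuit itself:

```python
def fib_power(x, k, mod):
    """Compute x ** F(k) mod `mod` using only two registers and multiplication.

    F(1) = F(2) = 1. Each step multiplies the two registers and bounces the
    product back and forth, so the exponent climbs the Fibonacci sequence
    without ever squaring a register.
    """
    a, b = x % mod, x % mod          # registers hold x^F(1), x^F(2)
    for _ in range(k - 2):
        a, b = b, (a * b) % mod      # x^F(i) * x^F(i+1) = x^F(i+2)
    return b

print(fib_power(3, 5, 1000))  # F(5) = 5, and 3**5 = 243
```

In the quantum setting, the point is that multiplication between two registers can be made reversible, whereas squaring a register in place cannot, which is why the Fibonacci chain is attractive for fault-prone hardware.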


Power of communication in leadership transition

When change is on the horizon, the worst thing a leader can do is ignore or suppress employees' natural reactions. Uncertainty leads to rumours and speculation. Instead, leaders should create an environment of open communication, where teams feel comfortable voicing their concerns, asking questions, and sharing their thoughts on the new leader’s vision and the upcoming changes. Being honest and transparent is key to building trust. Open communication can help ease fears, address worries, and empower employees to embrace changes and contribute to the organisation’s success. It’s important to clearly explain what is happening, why it’s happening, and how it may affect different roles. Avoiding the temptation to sugar-coat negative news is also crucial. Listening is just as important as speaking. Leaders should avoid getting defensive or dismissive when employees share their concerns. ... To effectively reassure employees, leaders need to understand the root causes of their anxiety. Whether concerns are about job security, changes in responsibilities, or shifts in the company’s culture, employees need to know that their concerns are being heard and taken seriously.


What are the most in-demand skills in tech right now?

Martyn said that while there are many approaches to gain new skills, she advises learners to understand the areas where they have a natural aptitude and explore their preferred learning style. “With the right attitude and an understanding of their natural aptitude, I recommend reaching out for support to a leader or coach to support in the creation of a formal learning and development plan starting with some small learning objectives and building over time,” she said. “The technical, business and cognitive skills required for success will evolve over time but putting the right routines in place to consistently retrospect on your skill level, generate new ideas, identify opportunities for learning and execute a learning plan is a fundamental skill that will support continuous growth in the long term.” Pareek said that mastery of digital technologies such as AI and data analytics is becoming increasingly important both in specialist roles and more generally, so adaptability and resilience is key. “Building a robust professional network and engaging in collaboration can unlock new opportunities, while mentorship provides valuable guidance. ...”



Quote for the day:

"One of the sad truths about leadership is that, the higher up the ladder you travel, the less you know." -- Margaret Heffernan

Daily Tech Digest - August 23, 2024

Generative AI is sliding into the ‘trough of disillusionment’

“Even as AI continues to grab the attention, CIOs and other IT executives must also examine other emerging technologies with transformational potential for developers, security, and customer and employee experience and strategize how to exploit these technologies in line with their organizations’ ability to handle unproven technologies,” Chandrasekaran said. ... Autonomous AI software was among four emerging technologies called out in the report because it can operate with minimal human oversight, improve itself, and become effective at decision-making in complex environments. “These advanced AI systems that can perform any task a human can perform are beginning to move slowly from science fiction to reality,” Gartner said in its report. “These technologies include multiagent systems, large action models, machine customers, humanoid working robots, autonomous agents, and reinforcement learning.” Autonomous agents are currently heading up the slope to the peak of inflated expectations. Just ahead of autonomous agents on that slope is artificial general intelligence, currently a hypothetical form of AI where a machine learns and thinks like a human does.


As Fintechs Stumble, A New Breed of ‘TechFins’ Move to the Fore

TechFins have provided many points of value in recent years, but particularly in 2024 and in the near future, they will highly benefit financial institutions in the areas of: Leveraging the power of transaction data cleansing and analysis; Artificial intelligence (AI); Fraud prevention and cost mitigation; Extending the personalized user experience and reliability of the digital banking application; Transforming digital banking platforms into a digital sales and service platform; Increasing revenues and lowering costs for financial institutions. With financial institutions amassing high volumes of transaction data within their ecosystems, processing and analyzing that data is becoming a greater priority. According to the Pragmatic Institute, data practitioners spend 80% of their valuable time finding, cleaning, and organizing the data. This leaves only 20% of their time to actually perform analysis on it. This is the 80/20 rule, also known as the Pareto principle. TechFins can provide vital support to financial institutions’ data teams through transaction cleansing, leaving them more time to build campaigns and take action on the data. 
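A hypothetical flavor of the transaction cleansing being delegated to TechFins: normalizing raw descriptors and dropping duplicate postings before analysts ever see the data. This is illustrative only; the record shapes are invented, and production pipelines do far more (merchant matching, categorization, enrichment):

```python
def cleanse(transactions):
    """Normalize descriptors and de-duplicate raw transaction records."""
    seen, clean = set(), []
    for t in transactions:
        if t["id"] in seen:
            continue                                 # drop duplicate postings
        seen.add(t["id"])
        desc = " ".join(t["desc"].split()).upper()   # collapse whitespace, unify case
        clean.append({**t, "desc": desc})
    return clean

raw = [
    {"id": 1, "desc": "amazon  mktp   us"},
    {"id": 1, "desc": "amazon  mktp   us"},   # duplicate posting
    {"id": 2, "desc": "Uber   Trip"},
]
print(cleanse(raw))
```

Offloading even this trivial normalization step is the point of the 80/20 argument: every hour not spent on it shifts a data practitioner's time from preparation back to analysis.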


The Developer Crisis: Mental Health, Burnout, and Retention

Seven out of 10 developers state that job satisfaction is the most important factor. Unplanned extra tasks and excessive overtime will have developers looking for the door. Businesses need to make it clear to both existing and new hires that they will do everything they can to respect these boundaries. Developers encounter constant roadblocks in their work, so time is precious. To help devs maintain a “flow state” (total focus on the task), businesses should consider re-evaluating their calendars to reduce unnecessary meetings. Where not already in place, software development frameworks could help dev teams better organize their work and progress through projects faster. As with any operational change, feedback is critical. ... By freeing developers from burdensome backend duties, they can stay creative and focus on developing innovative new frontend solutions to improve a customer’s overall experience. This makes brilliant business sense, particularly in the case of e-commerce, where standard feature developments, which would otherwise take up tons of developer resources, can be handled much more efficiently by a tech platform.


Vulnerability prioritization is only the beginning

Scrutiny of cybersecurity processes and performance is ratcheting up due to the dual hammers of increased regulatory scrutiny and the brutal trend of highly damaging attacks. The US Securities and Exchange Commission, the European Union, the US Department of Defense, the British national government, and the US Cybersecurity and Infrastructure Security Agency have all put or are putting in place significantly more stringent requirements for CISOs and their teams. Both the SEC and CISA have moved to push accountability to the Board of Directors and the C-Suite. This means that metrics alone are no longer sufficient for CISOs who want to provide full transparency. Process transparency has become just as critical to validate KPIs and allow auditors and the government to peer inside what were formerly security process “bottlenecks”. These bottlenecks are highly variable, human-centric processes, such as opening or closing a Jira ticket, back and forth commenting in a Slack thread, pushing a pull request on GitHub, or running a CI/CD pipeline to test and redeploy software after a patch. All can have human path dependencies, injecting uncertainty and variability.


Authentication and Authorization in Red Hat OpenShift and Microservices Architectures

Moving up the layers and looking at the blue layer (that is, interacting with OpenShift or Kubernetes in general) means communicating to the Kubernetes API server. This is true for both human and non-human users, whether they're using a GUI console or a terminal. Ultimately, all interaction with OpenShift or Kubernetes goes through the API server. The OAuth2/OIDC combination makes perfect sense for API authentication and authorization, so OpenShift features a built-in OAuth2 server. As part of the configuration of this OAuth2 server, a supported identity provider must be added. The identity provider helps the OAuth2 server confirm who the user is. Once this part has been configured, OpenShift is ready to authenticate users. For an authenticated user, OpenShift creates an access token and returns that token to the user. This token is called an OAuth access token. ... Users and Service Accounts can be organized into groups in OpenShift. Groups are useful when managing authorization policies to grant permissions to multiple users at once. For example, you can allow a group access to objects within a project instead of granting access to each user individually.
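As a concrete illustration of "all interaction goes through the API server": once a client holds an OAuth access token, it presents it as a bearer token on every request. This sketch uses only the Python standard library; the API server URL and token are placeholders, and the `users/~` path is OpenShift's "who am I?" endpoint:

```python
import urllib.request

API_SERVER = "https://api.cluster.example.com:6443"  # hypothetical API server URL
TOKEN = "sha256~REPLACE_WITH_REAL_ACCESS_TOKEN"      # placeholder OAuth access token

def build_request(path):
    """Build an authenticated GET request against the Kubernetes API server."""
    req = urllib.request.Request(API_SERVER + path)
    req.add_header("Authorization", f"Bearer {TOKEN}")
    req.add_header("Accept", "application/json")
    return req  # urllib.request.urlopen(req) would actually perform the call

req = build_request("/apis/user.openshift.io/v1/users/~")
print(req.get_header("Authorization").startswith("Bearer "))
```

Whether the caller is `oc`, the web console, or an in-cluster Service Account, the shape of the interaction is the same: an HTTPS request to the API server carrying a bearer token, which the server then checks against RBAC policies (including group memberships) before answering.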


Bridging the digital divide: driving positive impact where it is needed most

There are definitely pros and cons to building rural fiber networks. On the one hand, by nature the construction process is more complex and expensive, but this typically means that there is little to no competition, leading to higher customer demand and lower overbuild risk. The challenges are even more acute in areas that qualify for Project Gigabit subsidies, with barriers including challenging terrain, geography, and geology, which often increases costs and extends timelines. Due to the distances being covered, rural rollouts often also require more permits and wayleaves from multiple landowners, further increasing complexity. Without subsidies, these projects would not be commercially viable, but with cost cover of between 60 percent to 80 percent of capex, a defensive position is created for contract winners, which increases returns for investors, while also supporting some of the most neglected rural communities. In these cases, network commercialization is also likely to be more achievable and we are starting to see a growing evidence base of strong customer cohort penetration in these projects which supports that thesis.


How IT Leaders Can Benefit From Active Listening

Active listening is crucial for effective leadership, says Justice Erolin, CTO at BairesDev, a technology services company. "It strengthens team dynamics, drives innovation, and ensures that all voices are heard," he observes in an email interview. When IT leaders speak, particularly with business stakeholders, they often err by assuming everyone understands the taxonomy and language being used, Chowning observes. "This is frequently the case with technology-related terminology that we understand well, but which business stakeholders might define or understand much differently," she adds. "If we start from unequal or disconnected positions, then we tend to hear something other than what the speaker intended." IT leaders can improve collaboration by understanding team members' perspectives and enhancing problem-solving with deeper insights, Erolin says. It can also help build trust by making team members feel heard. "Ultimately, leaders will be able to make better decisions through diverse viewpoints." Erolin notes that BairesDev incorporates active listening skills into its leadership training program, recognizing the tool's importance in fostering a culture of trust and collaboration.


Embracing Data and Emerging Technologies for Quality Management Excellence

Traditionally, quality management has been seen through a compliance lens – a necessary business cost to meet regulations. To unleash QM’s power as a catalyst for ongoing business growth and customer satisfaction, a fundamental mindset shift toward a more comprehensive, proactive approach is crucial. In the past, quality reporting and data tracking were reactive, addressing issues after they occurred. This fuels a fix-it-later cycle instead of prevention. The needed cultural change is from reactive to proactive QM. Forward-thinking firms use AI and predictive analytics to foresee problems before they arise, emphasizing prevention and continuous improvement. However, some companies remain regulation-focused due to deep-rooted challenges. Breaking this mold requires realigning toward customer-centricity, building robust systems that prioritize satisfaction and ongoing enhancement, with regulatory compliance as a natural outcome. It’s key to see quality and regulatory goals as aligned drivers of commercial growth, not as conflicting inhibitors of it.


The reality of AI-centric coding

“Although AI is able to solve many college problem sets and handle small-to-medium snippets of code generation, it still struggles with complex logic, large code bases, and especially novel problems without precedent in the training data. Hallucinations and errors remain significant issues that require expert engineering oversight and correction,” Nag said. “These tools are far better at quick prototypes from scratch rather than iterating large applications, which is the bulk of engineering. Much of the context that drives large applications doesn’t actually exist in the code base at all.” Tom Taulli, who has authored multiple AI programming books, including this year’s AI-Assisted Programming: Better Planning, Coding, Testing, and Deployment, agreed that the move to greater GenAI coding efforts will catch most enterprises off guard. "These tools will mean a change in traditional workflows, approaches, and mindset. Consider that they are pretrained, so they are often not updated for the latest frameworks and libraries. Another issue is the context window. Code bases can be massive. But even the most sophisticated LLMs cannot handle the huge number of code files in the prompts,” Taulli said. 
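Taulli's context-window point survives a back-of-the-envelope check: even a mid-sized code base overflows a large prompt. The token ratio and window size below are rough, commonly cited approximations, not measurements of any specific model.

```python
# Rough sizing: code averages about 4 characters per token, and a
# large context window holds on the order of 128k tokens.
CHARS_PER_TOKEN = 4
CONTEXT_WINDOW_TOKENS = 128_000

def fits_in_context(total_chars: int) -> bool:
    """Estimate whether a body of source code fits in one prompt."""
    return total_chars / CHARS_PER_TOKEN <= CONTEXT_WINDOW_TOKENS

small_snippet = 20_000        # a few files: comfortably fits
mid_codebase = 50_000_000     # ~50 MB of source: two orders of magnitude too big
```

This is why the tools shine at "quick prototypes from scratch" but struggle to iterate on a large application: most of the application can never be in the prompt at once.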


Is AI Making Banking Safer or Just More Complicated?

The rise of AI in fraud detection has been a game changer. Through real-time analysis, machine learning and pattern recognition, AI tools can flag unusual transactions and often catch fraud before it occurs. AI's capabilities in anomaly detection allow financial institutions to be proactive, staying ahead of cybercriminals. But AI has its flaws. One of the most significant issues is the high rate of false positives. John MacInnes, a retired professor from Edinburgh, encountered this new reality firsthand. He tried to send 15,000 euros to a friend in Austria, expecting it to be a quick and routine transaction. The process became an ordeal involving the fraud team at Starling Bank. When MacInnes declined to provide personal messages and tax documents to prove the legitimacy of the payment, the bank took drastic action - it froze his account. It wasn't until the media wrote about his plight that the bank admitted it had gone too far and unfroze the account. This incident sheds light on a growing challenge for banks: While caution is understandable, overly aggressive fraud prevention can alienate the very customers they aim to protect.
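The false-positive problem is easy to see in miniature: a legitimate but unusually large transfer, like the 15,000-euro payment above, looks exactly like fraud to a simple statistical rule. The spending history and threshold below are invented for illustration; real systems use far richer features, but the core tension is the same.

```python
import statistics

# A customer's routine transaction history (amounts in euros).
history = [120.0, 80.0, 250.0, 60.0, 300.0, 95.0, 180.0]

def is_anomalous(amount: float, threshold: float = 3.0) -> bool:
    """Flag a transaction whose z-score against history exceeds the threshold."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    return abs(amount - mean) / stdev > threshold

flagged = is_anomalous(15_000.0)  # True: flagged, even though the payment is legitimate
```

The rule has no way to distinguish "unusual and fraudulent" from "unusual and perfectly legitimate", which is exactly the gap a bank's fraud team is supposed to close without freezing accounts.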



Quote for the day:

"Difficulties strengthen the mind, as labor does the body." -- Seneca

Daily Tech Digest - August 22, 2024

A Brief History of Data Ethics

The roots of the practice of data ethics can be traced back to the mid-20th century when concerns about privacy and confidentiality began to emerge alongside the growing use of computers for data processing. The development of automated data collection systems raised questions about who had access to personal information and how it could be misused. Early ethical discussions primarily revolved around protecting individual privacy rights and ensuring the responsible handling of sensitive data. One pivotal moment came with the formulation of the Fair Information Practice Principles (FIPPs) in the United States in the 1970s. These principles, which emphasized transparency, accountability, and user control over personal data, laid the groundwork for modern data protection laws and influenced ethical debates globally. ... Ethical guidelines such as those proposed by the European Union’s General Data Protection Regulation (GDPR) emphasize the importance of informed consent, limiting the collection of data to its intended use, and data minimization. All these concepts are part of an ethical approach to data and its usage. 


Collaborative AI in Building Architecture

As a design practice fascinated by the practical deployment of AI, we can’t help but be reminded of the early days of the personal computer, as this also had a high impact on the design of the workplace. Back in the 1980s, most computers were giant, expensive mainframes that only large companies and universities could afford. But then, a few visionary companies started putting computers on desktops, first in workplaces, then schools and finally homes. Suddenly, computing power was accessible to everyone, but it needed different spaces. ... As with any powerful new tool, AI also brings with it profound challenges and responsibilities. One significant concern is the potential for AI to perpetuate or even amplify biases present in the data it is trained on, leading to unfair or discriminatory outcomes. AI bias is already prevalent and it is crucial we learn how to teach AI to discern bias. Not so easy. AI could also be used maliciously, e.g. to create deepfakes or spread misinformation. There are also legitimate concerns about the impact of AI on jobs and the workforce, but equally how it improves and inspires that workforce.


The Deeper Issues Surrounding Data Privacy

Corporate legal departments will continue to draft voluminous agreement contracts packed with fine print provisions and disclaimers. CIOs can’t avoid this, but they can make a case to clearly present to users of websites and services how and under what conditions data is collected and shared. Many companies are doing this—and are also providing "Opt Out" mechanisms for users who are uncomfortable with the corporate data privacy policy. That said, taking these steps can be easier said than done. There are the third-party agreements that upper management makes that include provisions for data sharing, and there is also the issue of data custody. For instance, if you choose to store some of your customer data on a cloud service and you no longer have direct custody of your data, and the cloud provider experiences a breach that compromises your data, whose fault is it? Once again, there are no ironclad legal or federal mandates that address this issue, but insurance companies do tackle it. “In a cloud environment, the data owner faces liability for losses resulting from a data breach, even if the security failures are the fault of the data holder (cloud provider),” says Transparity Insurance Services.


A survival guide for data privacy in the age of federal inaction

First, organizations should map or inventory their data to understand what they have. By mapping and inventorying data, organizations can better visualize, contextualize and prioritize risks. And, by knowing what data you have, not only can you manage current privacy compliance risks, but you can also be better prepared to respond to new requirements. As an example, those data maps can allow you to see the data flows you have in place where you are sharing data – a key to accurately reviewing your third-party risks. Beyond preparing for existing and new privacy laws, this also allows organizations to identify their data flows and minimize risk exposure or compromise by better understanding where data is being distributed. Second, companies should think through how to operationalize priority areas to embed them in the business. This might be through training of privacy champions and adopting technology to automate privacy compliance obligations such as implementing an assessments program that allows you to better understand data-related impact.
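The data-map idea can be sketched very simply: once each data set records where it flows, third-party sharing of personal data — and therefore third-party risk — falls straight out of a query. The structure and names below are illustrative, not a prescription for any particular inventory tool.

```python
# A minimal data inventory: each entry records sensitivity and recipients.
data_map = [
    {"dataset": "customer_emails", "contains_pii": True,
     "shared_with": ["analytics-vendor", "crm-saas"]},
    {"dataset": "server_metrics", "contains_pii": False,
     "shared_with": []},
    {"dataset": "payment_records", "contains_pii": True,
     "shared_with": ["payment-processor"]},
]

def third_party_pii_flows(inventory):
    """List (dataset, recipient) pairs where personal data leaves the organization."""
    return [(entry["dataset"], recipient)
            for entry in inventory if entry["contains_pii"]
            for recipient in entry["shared_with"]]

flows = third_party_pii_flows(data_map)  # the flows worth reviewing first
```

When a new privacy law arrives, the same inventory answers "what do we hold and where does it go?" without a fresh discovery exercise.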


The Struggle To Test Microservices Before Merging

End-to-end testing is really where the rubber meets the road, and we get the most reliable tests when sending in requests that actually hit all dependencies and services to form a correct response. Integration testing at the API or frontend level using real microservice dependencies offers substantial value. These tests assess real behaviors and interactions, providing a realistic view of the system’s functionality. Typically, such tests are run post-merge in a staging or pre-production environment, often referred to as end-to-end (E2E) testing. ... What we really want is a realistic environment that can be used by any developer, even at an early stage of working on a PR. Achieving the benefits of API and frontend-level testing pre-merge would save effort on writing and maintaining mocks while testing real system behaviors. This can be done using canary-style testing in a shared baseline environment, akin to canary rollouts but in a pre-production context. To clarify that concept: We want to try running a new version of code on a shared staging environment, where that experimental code won’t break staging for all the other development teams, the same way a canary deploy can go out, break in production and not take down the service for everyone.
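The mechanism that makes a shared staging baseline safe for canary-style testing is request routing: only traffic tagged for a sandbox reaches the experimental version of a service, so other teams' requests never touch it. The header name and service registry below are invented to sketch the idea, not taken from any particular tool.

```python
# Registry of deployed versions per service: a stable baseline plus any
# experimental versions spun up for open pull requests.
services = {
    "checkout": {"baseline": "checkout-v42", "pr-1234": "checkout-pr-1234"},
}

def route(service: str, headers: dict) -> str:
    """Send a request to the sandbox named in its header, else to baseline."""
    sandbox = headers.get("x-sandbox")
    versions = services[service]
    return versions.get(sandbox, versions["baseline"])

# A developer testing PR 1234 tags their requests; everyone else hits baseline.
tagged = route("checkout", {"x-sandbox": "pr-1234"})
untagged = route("checkout", {})
```

Because untagged traffic always resolves to the baseline, an experimental version that crashes affects only the requests that explicitly opted in — the pre-production analogue of a canary deploy failing without taking down production.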


Neurotechnology is becoming widespread in workplaces – and our brain data needs to be protected

Neurotechnology has long been used in the field of medicine. Perhaps the most successful and well known example are cochlear implants, which can restore hearing. But neurotechnology is now becoming increasingly widespread. It is also becoming more sophisticated. Earlier this year, tech billionaire Elon Musk’s firm Neuralink implanted the first human patient with one of its computer brain chips, known as “Telepathy”. These chips are designed to enable people to translate thoughts into action. More recently, Musk revealed a second human patient had one of his firm’s chips implanted in their brain. ... These concerns are heightened by a glaring gap in Australia’s current privacy laws – especially as they relate to employees. These laws govern how companies lawfully collect and use their employees’ personal information. However, they do not currently contain provisions that protect some of the most personal information of all: data from our brains. ... As the Australian government prepares to introduce sweeping reforms to privacy legislation this month, it should take heed of these international examples and address the serious privacy risks presented by neurotechnology used in workplaces.


I Said I Was Technically a CISO, Not a Technical CISO

Often a CISO will not come from a technical background, or their technical background is long in their career rearview mirror. Can a CISO be effective today without a technical background? And how do you keep up on your technical chops once you get the role? ... We often talk about the need for a CISO to serve as a bridge to the rest of the business, but a CISO’s role still needs to be grounded in technical proficiency, argues Jeff Hancock, CISO at Access Point Technology, in a recent LinkedIn post. Now, many CISOs come from a technical background, but it becomes hard to maintain once you’re in a CISO role. Hancock says that while no one can be a master in all technical disciplines, CISOs should make a goal of selecting a few to retain mastery of over a long-term plan. Now, Andy, I’ll say, does this reflect your experience? Is this a matter of credibility with the rest of the security team, or does a technical understanding allow a CISO to do their job better? As you were a CISO, how much of your technical skills were sort of staying intact?


API security starts with API discovery

Because APIs tend to change quickly, it’s essential to update the API inventory continuously. A manual change-control process can be used, but this is prone to breakdowns between the development and security teams. The best way to establish a continuous discovery process is to adopt a runtime monitoring system that discovers APIs from real user traffic, or to require the use of an API gateway, or both. These options yield better oversight of the development team than relying on manual notifications to the security team as API changes are made. ... Threats can arise from outside or inside the organization, via the supply chain, or by attackers who either sign up as paying customers, or take over valid user accounts to stage an attack. Perimeter security products tend to focus on the API request alone, but inspecting API requests and responses together gives insight into additional risks related to security, quality, conformance, and business operations. There are so many factors involved when considering API risks that reducing this to a single number is helpful, even if the scoring algorithm is relatively simple.
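The runtime-discovery approach can be sketched in a few lines: observed request paths are normalized so that `/users/123` and `/users/456` collapse into one inventory entry. The normalization rule here (numeric segments become `{id}`) is a deliberate simplification of what real discovery tools infer.

```python
def normalize(path: str) -> str:
    """Collapse numeric path parameters so endpoints, not requests, are counted."""
    return "/".join("{id}" if part.isdigit() else part
                    for part in path.split("/"))

def discover(traffic):
    """Build an API inventory of (method, normalized path) pairs from real traffic."""
    return sorted({(method, normalize(path)) for method, path in traffic})

observed = [
    ("GET", "/users/123"),
    ("GET", "/users/456"),
    ("POST", "/users/123/orders"),
]
inventory = discover(observed)  # two endpoints emerge from three requests
```

Because the inventory is rebuilt from whatever traffic actually flows, a new endpoint shipped without notifying the security team still shows up — which is precisely the breakdown that manual change control fails to catch.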


3 key strategies for mitigating non-human identity risks

The first step of any breach response activity is to understand if you’re actually impacted; the ability to quickly identify any impacted credentials associated with the third-party experiencing the incident is key. You need to be able to determine what the NHIs are connected to, who is utilizing them, and how to go about rotating them without disrupting critical business processes, or at least understand those implications prior to rotation. We know that in a security incident, speed is king. Being able to outpace attackers and cut down on response time through documented processes, visibility, and automation can be the difference between mitigating direct impact from a third-party breach, or being swept up in a list of organizations impacted due to their third-party relationships. ... When these factors change from baseline activity associated with NHIs they may be indicative of nefarious activity and warrant further investigation, or even remediation, if an attack or compromise is confirmed. Security teams are not only regularly stretched thin, but they also often lack a deep understanding across the organization’s entire application and third-party ecosystem as well as insights into what assigned permissions and associated usage is appropriate.
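The baseline comparison described above can be made concrete: each non-human identity has an expected usage profile, and deviations in source or scope get surfaced for investigation. Every field and value below is illustrative; real NHI tooling tracks far richer baselines.

```python
# Expected usage profile for each non-human identity (NHI).
baseline = {
    "ci-deploy-key": {"sources": {"10.0.1.15"}, "scopes": {"deploy"}},
}

def deviations(identity: str, event: dict) -> list[str]:
    """Compare one observed usage event against the identity's baseline."""
    expected = baseline[identity]
    issues = []
    if event["source"] not in expected["sources"]:
        issues.append("unexpected source network")
    if not set(event["scopes"]) <= expected["scopes"]:
        issues.append("scope beyond baseline")
    return issues

# A deploy key suddenly used from an unknown host, requesting admin scope:
alert = deviations("ci-deploy-key",
                   {"source": "203.0.113.9", "scopes": ["deploy", "admin"]})
```

An empty result means activity matches the baseline; a non-empty one is the trigger for the investigation, and potentially the credential rotation, discussed above.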


The Rising Cost of Digital Incidents: Understanding and Mitigating Outage Impact

Causal AI for DevOps promises a bridge between observability and automated digital incident response. By ‘Causal AI for DevOps’ I mean causal reasoning software that applies machine learning (ML) to automatically capture cause and effect relationships. Causal AI has the potential to help dev and ops teams better plan for changes to code, configurations or load patterns, so they can stay focused on achieving service-level and business objectives instead of firefighting. With Causal AI for DevOps, many of the incident response tasks that are currently manual can be automated: When service entities are degraded or failing and affecting other entities that make up business services, causal reasoning software surfaces the relationship between the problem and the symptoms it is causing. The team with responsibility for the failing or degraded service is immediately notified so they can get to work resolving the problem. Some problems can be remediated automatically. Notifications can be sent to end users and other stakeholders, letting them know that their services are affected along with an explanation for why this occurred and when things will be back to normal. 
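The core causal-reasoning step can be sketched with a dependency graph: a root cause is an unhealthy entity whose own dependencies are all healthy, and everything upstream of it is a symptom. The graph and health states below are invented, and real causal AI infers the relationships from telemetry rather than reading a hand-written map.

```python
# Service dependency graph: each entity lists what it depends on.
depends_on = {
    "web-frontend": ["checkout-api"],
    "checkout-api": ["payments-db"],
    "payments-db": [],
}

unhealthy = {"web-frontend", "checkout-api", "payments-db"}

def root_causes() -> set[str]:
    """An unhealthy entity with no unhealthy dependencies is a root cause;
    unhealthy entities above it are symptoms of that cause."""
    return {entity for entity in unhealthy
            if all(dep not in unhealthy for dep in depends_on[entity])}

culprits = root_causes()  # the team responsible for these gets paged first
```

With this separation, one database failure generates one actionable notification instead of three competing alerts, which is the difference between surfacing the problem and flooding the on-call rotation with its symptoms.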



Quote for the day:

"Holding on to the unchangeable past is a waste of energy, and serves no purpose in creating a better future." -- Unknown

Daily Tech Digest - August 21, 2024

Use the AI S-curve to drive meaningful technological change

The S-curve is a graphical representation of how technology matures over time. It starts slowly, with early adopters, specialized use cases, and technocrats. As the technology proves its value, it enters a phase of rapid growth where adoption accelerates and becomes more widely integrated into various industries and applications. However, as technology advances, becoming cheaper, faster, and more efficient, it inevitably reaches some logical limit and settles into a natural “top” of the S-curve. When a technology reaches its limit, progress is relatively slow, typically requiring significant increases in complexity. ... As new technologies like AI emerge and mature, organizations must balance the need to stay competitive with the potential risks and uncertainties associated with early adoption. This challenge is not new. In his book, The Innovator’s Dilemma, Clayton Christensen describes the difficult choice companies face between maintaining their existing, profitable business models and investing in new, potentially disruptive technologies. So, how can organizations navigate this decision? One approach is to ensure that there is a dedicated unit that operates on a long takt time, outside the quarterly or annual reporting pressure. 
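The curve described above is well captured by the logistic function: slow early growth among early adopters, rapid middle-phase adoption, and a flattening approach to the technology's natural limit. The parameter values below are illustrative, not a model of any particular technology.

```python
import math

def s_curve(t: float, L: float = 1.0, k: float = 1.0, t0: float = 0.0) -> float:
    """Logistic adoption curve: L is the ceiling (the natural 'top'),
    k the growth rate, and t0 the midpoint where growth is fastest."""
    return L / (1 + math.exp(-k * (t - t0)))

early = s_curve(-6)   # near 0: early adopters and specialized use cases
middle = s_curve(0)   # 0.5: the steep phase of rapid, widening adoption
late = s_curve(6)     # near L: progress slows as the limit is approached
```

The derivative is largest at `t0` and shrinks toward both ends, which is the mathematical restatement of the article's point: near the top, further progress requires significant increases in complexity for diminishing gains.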


How to Present the Case for a Larger IT Budget

Outright rejection of a budget expansion request is unlikely, but not impossible. "The important thing is not to take rejection personally -- it's the case that's rejected, not the person who presented it," Biswas says. It's also important to understand that the rejection is not necessarily wrong. "Sometimes, we get too close to our ideas to evaluate them impartially," he explains. Understand, too, that a rejection isn't necessarily forever, since issues that prevented approval can be addressed and resolved to present a more convincing case. It's important to fully understand stakeholders' individual interests as well as their tactical and strategic goals, Hachmann advises. "This approach requires a strong understanding of the respective priorities of each person involved in the budget approval processes," he says. "With this [tactic], you'll be better equipped to align IT initiatives and their costs with the stakeholders' business strategy." IT leaders often make the mistake of generalizing on how a bigger budget will improve IT instead of communicating the ways it will help the business. "IT leaders should be careful they're not perceived as 'empire builders' instead of business leaders who want what's best for the larger organization," Biswas says. 


Beyond Orchestration: A Comprehensive Approach to IaC Strategy

In large organizations, enforcing a single IaC tool across all departments is often impractical. Today, there is a diversity of tools that cater to different stacks, strengths and collaboration with developers — from those that are native to a specific platform (CloudFormation for AWS or ARM for Azure), and those for multicloud or cloud native, from Terraform and OpenTofu, to Helm and Crossplane, and those that cater to developers like Pulumi or AWS Cloud Development Kit (CDK). Different teams may prefer different tools based on their expertise, use cases or specific project requirements. A robust IaC strategy must account for: the coexistence of multiple IaC tools within the organization; visibility across various IaC implementations; and governance and compliance across diverse IaC ecosystems. Ignoring this multi-IaC reality can lead to silos, reduced visibility and governance challenges. ... As DevOps and platform engineers, we’ve developed a platform that we ourselves have needed over many years of managing cloud fleets at scale. A platform that addresses not just tooling and orchestration, but all aspects of a comprehensive IaC strategy can be the difference between 2 a.m. downtime and a good night’s sleep.


DevSecOps Needs Are Shifting in the Cloud-Native Era

A cornerstone activity for any DevSecOps team is to secure secrets — that is, the passwords and access credentials that allow access to services and applications. Marks noted that despite many respondents having tools in place for secret scanning or detection, the highest number of incidents (32%) were from secrets stolen from a repository. The study also included data on frequency of usage of tools. She said that it showed that scanning takes place periodically, including daily, multiple times per week, or weekly, but this was not aligned with code pushes or development processes. "So, this is an area for much improvement as scanning takes resources and time and should align better with developer workflows," Marks said. ...  Having so many tools can introduce such challenges as gaining consistency across development teams, dealing with alert fatigue, or determining which remediations are needed and/or how remediation can mitigate risk. ... "Instead, a third-party platform that can support multiple tools can serve as a governance layer to help orchestrate the usage of needed tools, collect data, and help security teams more efficiently gain the visibility they need, apply the right controls and processes, and determine needed actions."
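Marks's point about aligning scanning with developer workflows amounts to running the check on every push, not on a weekly schedule. A minimal sketch of such a check, as might run from a pre-push hook, is below; the patterns are a tiny illustrative subset of what real secret scanners ship.

```python
import re

# A few secret-like shapes; production scanners ship hundreds of rules.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key ID shape
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),    # hardcoded password
]

def scan(diff_text: str) -> list[str]:
    """Return secret-like strings found in the text of a pushed diff."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(diff_text))
    return hits

findings = scan('db_url = "postgres://app"\npassword = "hunter2"')
```

Run at push time, a non-empty result blocks the push before the secret ever reaches the repository — closing exactly the gap the survey found, where periodic scans lag behind code pushes and stolen repository secrets lead the incident count.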


Exclusive: How Piramidal is using AI to decode the human brain

The company is first fine-tuning its model for the neuro ICU; that product will be able to ingest EEG data and interpret it in near-real time, providing outputs to medical staff on occurrence and diagnosis of disorders such as seizures, traumatic brain bleeding, inflammations and other brain dysfunctions. “It is truly an assistant to the doctor,” said Pahuja, noting that the model can ideally help provide quicker and more accurate diagnoses that can save doctors’ time and get patients the care they need much more quickly. “Brainwaves are central to neurology diagnosis,” Piramidal co-founder and CEO Dimitris Sakellariou, who holds a PhD in neuroscience, told VentureBeat. By automating analysis and enhancing understanding through large models, personalized treatment can be revolutionized and diseases can be predicted earlier in their progression, he noted. And, as wireless EEG sensors become more mainstream, models like Piramidal’s can enable the creation of personalized agents that “continuously measure and monitor brain health.” “These agents will offer real-time insights into how patients respond to new treatments and how their conditions may evolve,” said Sakellariou.


What is ‘model collapse’? An expert explains the rumours about an impending AI doom

There are hints developers are already having to work harder to source high-quality data. For instance, the documentation accompanying the GPT-4 release credited an unprecedented number of staff involved in the data-related parts of the project. We may also be running out of new human data. Some estimates say the pool of human-generated text data might be tapped out as soon as 2026. It’s likely why OpenAI and others are racing to shore up exclusive partnerships with industry behemoths such as Shutterstock, Associated Press and NewsCorp. They own large proprietary collections of human data that aren’t readily available on the public internet. However, the prospects of catastrophic model collapse might be overstated. Most research so far looks at cases where synthetic data replaces human data. In practice, human and AI data are likely to accumulate in parallel, which reduces the likelihood of collapse. The most likely future scenario will also see an ecosystem of somewhat diverse generative AI platforms being used to create and publish content, rather than one monolithic model. This also increases robustness against collapse.
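The replacement-versus-accumulation distinction can be seen in a toy simulation: refit a Gaussian on its own samples each generation (replacement) and the distribution degenerates, but keep the original human data in the training pool (accumulation) and its diversity largely survives. This is purely illustrative of the dynamic, not a claim about any real model.

```python
import random
import statistics

random.seed(0)  # fixed seed so the illustration is reproducible

def final_spread(accumulate: bool, n_gens: int = 300, n: int = 50) -> float:
    """Repeatedly fit a Gaussian to a pool and resample from the fit.
    Returns the spread of the final pool after n_gens generations."""
    human = [random.gauss(0, 1) for _ in range(n)]  # the original human data
    pool = list(human)
    for _ in range(n_gens):
        mu, sigma = statistics.mean(pool), statistics.pstdev(pool)
        synthetic = [random.gauss(mu, sigma) for _ in range(n)]
        # Replacement discards the human data; accumulation keeps it alongside.
        pool = human + synthetic if accumulate else synthetic
    return statistics.pstdev(pool)

collapsed = final_spread(accumulate=False)  # spread decays generation by generation
preserved = final_spread(accumulate=True)   # anchored near the original spread
```

The human data acts as an anchor: as long as it stays in the pool, each refit is pulled back toward the true distribution, which is why parallel accumulation of human and AI data reduces the likelihood of collapse.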


Custodians looking to beat offenders in the GenAI cybersecurity battle

While the security community, along with the global technology community at large, seems united on somehow regulating this new-age technology, there are a limited number of things that can actually be done. “There are two ways to combat attacks enabled by the widespread use of GenAI,” Kashifuddin said. “For internal threats, it comes down to deploying ‘cyber for GenAI’. For external threats, the use of ‘GenAI for cyber’ defense is becoming more of a reality and evolving quickly.” The use of cyber for GenAI threats simply means applying fundamental controls to protect company resources from a GenAI-based attack, he explained. “Traditional data protection tools like Data Loss Prevention (DLP), Cloud Access Security Broker (CASB) when used in conjunction with web proxies amplify a company’s ability to detect and restrict exfiltration of sensitive data to external GenAI services.” “GenAI for cyber” refers to a growing class of techniques using GenAI to combat GenAI-induced attacks. Apart from advanced phishing detection and automated incident response, this includes a number of new ways to improve models in order to neutralize adversarial activities. “The discipline of protecting AI systems is just beginning to evolve, but there are some interesting techniques for that already,” Barros said. 


New phishing method targets Android and iPhone users

ESET analysts discovered a series of phishing campaigns targeting mobile users that used three different URL delivery mechanisms. These mechanisms include automated voice calls, SMS messages, and social media malvertising. The voice call delivery is done via an automated call that warns the user about an out-of-date banking app and asks the user to select an option on the numerical keyboard. After the correct button is pressed, a phishing URL is sent via SMS, as was reported in a tweet. Initial delivery by SMS was performed by sending messages indiscriminately to Czech phone numbers. The message sent included a phishing link and text to socially engineer victims into visiting the link. The malicious campaign was spread via registered advertisements on Meta platforms like Instagram and Facebook. These ads included a call to action, like a limited offer for users who “download an update below.” After opening the URL delivered in the first stage, Android victims are presented with two distinct campaigns, either a high-quality phishing page imitating the official Google Play store page for the targeted banking application, or a copycat website for that application. From here, victims are asked to install a “new version” of the banking app.


The Cloud Talent Crisis: Skills Shortage Drives Up Costs, Risks

"Cloud complexity is growing by the day, and with it, the challenge of responding to security threats," he said. "Organizations need more skilled engineers to deal with attacks — or even notice them." He noted that phishing attacks, password leakage, and third-party attacks — the three biggest threats reported in this year's survey — are even more dangerous without skilled, well-resourced personnel. ... "For me, cloud waste is the biggest concern," he said. "It means more money goes where it shouldn't, less money is available to hire talented staff, and fewer resources are available to that staff." ... "This approach can result in a faster pace of innovation, better mapping of features with customer requirements, and additional cost savings opportunities," he explained. Some components of this strategy include working backwards from the customer, organizing teams around products, keeping development teams small, and reducing risk through iteration. "There is a clear correlation between a lack of skilled talent and a lack of cloud maturity," O'Neill said. "High-maturity organizations tend to establish cloud principles and then strictly adhere to them."


The critical imperative of data center physical security: Navigating compliance regulations

In an increasingly digital world where data is often considered the new currency, data centers serve as the fortresses that safeguard the invaluable assets of organizations. While we often associate data security with firewalls, encryption, and cyber threats, it's imperative not to overlook the significance of physical security within these data fortresses. By assessing risks associated with physical security, environmental factors, and access controls, data center operators can take proactive measures to mitigate said risks. These measures greatly aid data centers in preventing unauthorized access, which can lead to data theft, service disruptions, and financial losses. Additionally, failing to meet compliance regulations can result in severe legal consequences and damage to an organization's reputation. In a perfect world, simply implementing iron-clad physical barriers and adhering to compliance regulations would completely eliminate the risk of data breaches. Unfortunately, that’s simply not the case. Data center security and compliance encompass not only cybersecurity and physical security, but secure data sanitization and destruction as well. 



Quote for the day:

"Personal leadership is the process of keeping your vision and values before you and aligning your life to be congruent with them." -- Stephen R. Covey