Daily Tech Digest - May 12, 2024

The tone at the top should drive risk-aware cultures. Top management should ensure that strategies, business models, and processes are viewed collectively through the lens of risk management and that buy-in is achieved across all levels. For this to happen, the one thing organisations should consciously avoid is people working in silos. Risk management is not a standalone function but a collaborative effort to ensure the enterprise is risk-mature and resilient. Well-thought-out training programs, with case studies of successes and failures, should be showcased so the entire team understands risk exposures and their impact. Training should be an ongoing process, because what worked well historically may not remain relevant in today's dynamic business environments. Adaptability and agility are key cultural traits that top management must foster. While strategies and processes flow from the top, the bottom-up feedback loop is equally important for understanding the practical realities in the trenches of day-to-day processes.


Synthetic ID Fraud Rises 98% in Auto Lending Industry

It is important to note that, more than ever, data breaches are targeting insurance in healthcare and government, but the same data is being used in other industries. This emerging trend in auto fraud can be attributed to the appeal of high credit limits and the ease of securing online auto loans without having to visit dealerships in person. At the same time, the practice of credit washing by a few credit repair companies is prevalent. Credit washing involves systematically disputing all negative tradelines on a credit report not as reporting errors but as outright identity theft. ... Not all fraudulent activities in the auto lending industry involve complex schemes such as synthetic identity fraud. Often, borrowers inflate their income or misrepresent their financial status to enhance their chances of securing a loan. Fraudsters also use shell companies to create false employment verifications. The report identifies over 11,000 fake companies circulating within the industry. Although these schemes may seem harmless, 40% of loans secured with a fake employer end in charge-offs because the borrowers never intended to repay.


What is a digital twin and why is it important to IoT?

The terms simulation and digital twin are often used interchangeably, but they are different things. A simulation is designed with a CAD system or similar platform, and can be put through its simulated paces, but may not have a one-to-one analog with a real physical object. A digital twin, by contrast, is built out of input from IoT sensors on real equipment, which means it replicates a real-world system and changes with that system over time. ... Just as digital twins serve different purposes in different industries, the value of digital twins differs depending on the application. In the world of manufacturing, for example, a digital twin can enable product designers to try out prototypes before settling on a final design. It’s a way to use digital resources to develop and refine products instead of tapping physical engineering resources. With a digital replica of a product that simulates the real thing in a virtual space, designers can rapidly generate new iterations, optimize their product designs, and improve product quality along the way. In the semiconductor industry, digital twins can exist in the cloud and replace physical research models.
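
To make the distinction concrete, here is a minimal, hypothetical sketch (in Python) of how a digital twin stays bound to its physical counterpart: each incoming IoT sensor reading updates the twin's state, so the virtual model evolves in step with the real asset rather than running as a one-off simulation. The pump, its sensor fields, and the maintenance threshold are illustrative assumptions, not drawn from the article.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class PumpDigitalTwin:
    """Hypothetical twin of a physical pump, kept in sync by sensor telemetry."""
    vibration_history: list = field(default_factory=list)
    temperature_c: float = 0.0

    def ingest(self, reading: dict) -> None:
        # Each IoT reading updates the twin so it tracks the real asset over time.
        self.temperature_c = reading["temperature_c"]
        self.vibration_history.append(reading["vibration_mm_s"])

    def needs_maintenance(self) -> bool:
        # Simple illustrative rule; a production twin would use a trained model.
        recent = self.vibration_history[-10:]
        return bool(recent) and mean(recent) > 4.5

twin = PumpDigitalTwin()
twin.ingest({"temperature_c": 61.2, "vibration_mm_s": 5.1})
print(twin.needs_maintenance())
```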


How does Artificial Intelligence Impact the Modernization of Legacy Applications?

Over their years of existence, legacy apps accumulate not only technical debt but also interest on that debt, which significantly complicates code optimization in the future: the more the application is used, the more updates it has, and the more technical debt it eventually accumulates. AI-powered assistance makes refactoring much easier, helping to identify code duplication, excess memory consumption, or other resource waste. To level up app performance, AI can offer code quality enhancements, unit test case generation, or, in some cases, refactoring parts of monolithic code into composable components. ... Most legacy applications cannot compete with modern ones due to complex, unclear, or confusing architecture that requires additional effort for the latest integrations and maintenance. AI-powered analysis tools can explore existing architecture, identify pitfalls and weaknesses, and suggest possible solutions. These can include moving to reliable and cost-efficient cloud-based storage, transitioning to microservices, or replacing outdated components. ... Generative AI identifies bottlenecks and offers solutions to handle high workloads, such as new configurations for load balancers and algorithms to optimize traffic distribution.
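
As one illustration of the kind of chore such tooling automates, the sketch below flags duplicated function bodies in Python source by hashing their normalized syntax trees. This is a deliberately simplified, non-AI heuristic standing in for the AI-assisted duplicate detection the article describes; the sample functions are hypothetical.

```python
import ast
import hashlib
from collections import defaultdict

def duplicate_functions(source: str) -> dict:
    """Group function names by a hash of their (roughly normalized) bodies."""
    tree = ast.parse(source)
    groups = defaultdict(list)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            # ast.dump ignores formatting and comments, so identical logic hashes alike.
            body_repr = ast.dump(ast.Module(body=node.body, type_ignores=[]))
            digest = hashlib.sha256(body_repr.encode()).hexdigest()
            groups[digest].append(node.name)
    return {h: names for h, names in groups.items() if len(names) > 1}

code = """
def total(xs):
    return sum(xs)

def add_all(xs):
    return sum(xs)
"""
print(duplicate_functions(code))  # both functions share one hash
```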


Cybersecurity in a Race to Unmask a New Wave of AI-Borne Deepfakes

Mandia warns that the next wave of AI-generated audio and video will be especially tough to detect as phony. "What if you have a 10-minute video and two milliseconds of it are fake? Is the technology ever going to exist that's so good to say, 'That's fake'? We're going to have the infamous arms race, and defense loses in an arms race." Cyberattacks overall have become more costly financially and reputation-wise for victim organizations, Mandia says, so it's time to flip the equation and make it riskier for the threat actors themselves by doubling down on sharing attribution intel and naming names. "We've actually gotten good at threat intelligence. But we're not good at the attribution of the threat intelligence," he says. The model of continuously putting the burden on organizations to build up their defenses is not working. "We're imposing cost on the wrong side of the hose," he says. Mandia believes it's time to revisit treaties with the safe harbors of cybercriminals and to double down on calling out the individuals behind the keyboard and sharing attribution data in attacks. 


Navigating the AI Revolution: Strategies for Success in 2024

As the AI landscape evolves, it becomes evident that there is no one-size-fits-all solution. Organizations will need to adopt a multimodel approach, incorporating a variety of models tailored to specific industries, domains, and use cases. Shawn suggests: "Don't get distracted by a particular LLM brand. Saying ChatGPT is better than Claude, and this one's better than Meta, and so on and so forth, depends on your use case. You're going to end up having multiple models in your environment to achieve different business goals. In addition, models will continue to evolve." ... The rapid advancement of AI has sparked global discussions about regulations, compliance, and ethics. To ensure compliance, familiarize yourself with the European Union AI Act, the National Institute of Standards and Technology (NIST) guidelines, and other relevant regulations. However, it is crucial to prioritize responsible AI practices beyond mere compliance. This involves addressing data privacy, security, human-AI collaboration, and transparency. 


Global alarm intensifies as state-sponsored cyberattacks raise risks

“One key factor has been the expansion of connected systems due to the IT/OT convergence, where organizations are having their OT cybersecurity roll under central IT structures. Another factor has been the wider adoption of remote access driven after COVID,” Harshal Haridas, chief architect for Honeywell OT Cybersecurity, told Industrial Cyber. “A lot of attacks involve malware that are often deployed via USB devices. State-sponsored hackers are also using AI to enable more of their capabilities in penetrating sensitive systems.” Bryce Livingston, a senior adversary hunter at Dragos, said that the perceived surge in cyberattacks can likely be attributed to several interconnected factors: elevated geopolitical tensions in multiple regions across the globe, in addition to continued growth in the global cybercriminal ecosystem, where we see specialized criminal economies of scale emerging. “This specialization has lowered the barrier to entry for engaging in cybercrime.” Additionally, Livingston pointed out that “we see the increasing use of cyberattacks by hacktivist personas to influence perceptions around certain events..." 


How to Build and Foster High-performing Software Teams

As a leader, you should establish channels for regular communication and collaboration between teams. This could involve project management tools, regular meetings, etc. For us, what works well are informal social events (small and big). Transparency builds trust and helps teams anticipate roadblocks or opportunities to collaborate. It is of course easier said than done, but defining clear, measurable goals for each team that contribute to the overall organizational objectives is a real stepping stone to successful leadership. If there is an opportunity, consider establishing a central coordination team or committee. For instance, a recognition committee could be responsible for recognizing the achievements of team members. The most effective strategy will depend on the specific teams, their work styles, and the overall culture. I always focus on empowering teams to achieve the desired outcomes, not micromanaging them. Finally, celebrate successes achieved through collaboration and put in place policies that reward such successes. This reinforces the value of teamwork and motivates further collaboration across your teams.


Chinese State-Backed Hackers Suspected in Third Party Breach Impacting UK Armed Forces

The full details of the exposed UK armed forces data have yet to be made public, but in addition to triggering an investigation the third party breach also prompted the Ministry of Defence to announce an “eight point plan” to identify security failings and prevent such incidents from happening again. The Ministry did indicate that there was “evidence of potential failings” at SSCL that the hackers took advantage of, but did not elaborate on whether that means an unpatched vulnerability or an employee falling for a phishing approach. It is also not yet clear what the Chinese government would want with UK armed forces payroll data. For the most part these APT groups stick to espionage and theft of beneficial corporate secrets, but generally stop short of taking money or running financial scams. Some of the Chinese APT groups are private sector contractors, however, and several have been observed targeting crypto or other funds seemingly as a side activity for their own benefit. Tom Lysemose Hansen, CTO of Promon, elaborates on what this stolen data might be used for: “Nothing and nobody is unhackable, that’s the lesson from this. 


AI within the data center

AI demands even greater computing capacity, higher-capacity data centers, and more energy than legacy applications, all of which lead to greater environmental impacts. Yet, amidst these challenges, there is hope. There is a strong and growing emphasis on standards and consumer preferences for companies that embrace sustainability. By uniting sustainability and the new high-density AI applications, we can pave the way for data center innovation with construction and operational breakthroughs. This approach embraces AI's higher demands and promotes sustainability, offering a promising path forward. Historically, sustainability efforts were a corporate nod to investors and a small faction of society that prioritized these ideas. As time has passed, sustainability has become a critical planning factor that integrates environmental goals, finance, and business strategy. Part of the growing sustainability movement is due to impending rules and regulations; part is from societal pressure from those who put their dollars where their ecological priorities lie. Furthermore, part is corporate awareness that businesses must consider climate change in their business strategies, and part, we must believe, is genuinely altruistic.



Quote for the day:

"Smart leaders develop people who develop others, don't waste your time on those who won't help themselves." -- John C. Maxwell

Daily Tech Digest - May 11, 2024

Democratizing software testing in the age of GenAI

To encourage the “shift-left” movement—which advocates for testing early and often—many test tool vendors are exploring Copilot-like methods for script-based test automation. They anticipate that developers will use these tools to generate early test scripts with GenAI assistance. This trend highlights how AI-assisted technologies can optimize workflows by automating routine tasks and suggesting improvements, perfectly aligning with the proactive shift-left approach. However, should we narrowly define GenAI-driven test automation as merely an extension of tools like Copilot for creating Selenium-like scripts? Such a view greatly underestimates the transformative impact of AI in quality assurance (QA) testing. To truly leverage GenAI’s capabilities, we must expand our perspective beyond developer-centric models. While integrating testing earlier in the development process is beneficial, GenAI’s real strength lies in democratizing testing, fulfilling its core promise by enabling a broader range of participants, including manual testers, to effectively use no-code test automation tools.
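
For context, the script below is the sort of Selenium-style artifact a Copilot-like assistant might draft from a plain-language prompt, and the kind of script the article argues GenAI testing should not be reduced to. The URL, element IDs, and credentials are hypothetical placeholders.

```python
# Illustrative only: the kind of script a GenAI assistant might draft from the
# prompt "verify that logging in with valid credentials shows the dashboard".
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "username").send_keys("demo-user")
    driver.find_element(By.ID, "password").send_keys("demo-pass")
    driver.find_element(By.ID, "submit").click()
    assert "Dashboard" in driver.title, "Login did not reach the dashboard"
finally:
    driver.quit()
```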


Composability to Jamstack: Drilling Down on Frontend Terms

Composable is a term used by Netlify frequently, and some developers see it as a marketing term that basically means “an enterprise version of Jamstack,” said Rinaldi. That’s not true, he said. “It’s really much more focused on the backend,” he said. “In fact, … it’s not even concerned with what kind of application you’re building on the frontend. You could have a composable architecture that talks to a mobile application, you could have it talk to a web application.” Whereas Jamstack was very focused on how developers build a website, composability takes a broader view — though it is more of a practice for large organizations, he added. “I have all these different APIs, now I need to create this whole kind of backend for frontend pattern where I might have a layer on the frontend that’s just trying to weave together all my backend APIs,” he said. “Now we need to get the customer data from the customer API to get the customer ID to then pass it to the orders API to get the orders. You’re weaving together this complex stuff, often coming from different systems and different APIs. And it was hard to pull all that together.”
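
A minimal sketch of that backend-for-frontend weaving, assuming two hypothetical internal APIs (a customer service keyed by email and an orders service keyed by customer ID), might look like this in Python:

```python
import requests

# Hypothetical endpoints; a real BFF would add auth, caching, and error handling.
CUSTOMER_API = "https://customers.internal/api/customers"
ORDERS_API = "https://orders.internal/api/orders"

def customer_with_orders(email: str) -> dict:
    """Weave two backend APIs into one response shaped for the frontend."""
    customer = requests.get(CUSTOMER_API, params={"email": email}, timeout=5).json()
    orders = requests.get(ORDERS_API, params={"customerId": customer["id"]}, timeout=5).json()
    # Return only what the page actually needs, hiding backend structure.
    return {
        "name": customer["name"],
        "orders": [{"id": o["id"], "status": o["status"]} for o in orders],
    }
```

The point of such a layer is exactly what Rinaldi describes: the frontend asks one question, and the aggregation logic absorbs the complexity of multiple backend systems.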


Companies Without a Chief AI Officer are Bound to F-AI-L

“While the CTO is responsible for overseeing an organisation’s overall technology strategy and infrastructure, the CAIO’s primary responsibility is to identify opportunities for AI deployment, develop an AI strategy aligned with business goals, and oversee the execution of AI initiatives,” said Sachin S Panicker, Chief AI Officer, Fulcrum Digital Inc. Simply put, the CAIO oversees the development and implementation of AI projects across the company. This could involve collaborating with data scientists, engineers, and other technical teams. They might also manage partnerships with external AI vendors. ... It also becomes important to have a chief information security officer, once the AI strategy is in place, who can guarantee the safety of generative AI tools within the organisation. The challenges posed by generative AI have become a significant headache for SaaS security teams. According to a recent Salesforce study, more than half of GenAI adopters use unapproved tools at work. The research found that despite GenAI’s benefits, a lack of clearly defined policies around its use may put businesses at risk. Most likely, CISO roles are also changing with generative AI.


Singapore updates cybersecurity law to expand regulatory oversight

The updates are meant to keep pace with developments in technology and business practices and extend the CSA's regulatory oversight to other entities and systems beyond physical assets. The amendments will enable the regulator to better respond to evolving cybersecurity challenges and operate on a risk-based approach in regulating entities, Puthucheary said. For instance, when the Cybersecurity Act was first established in 2018, it sought to regulate physical CIIs (critical information infrastructures). The minister noted that new technology and business models have since emerged, particularly with the advent of cloud computing. ... The updated legislation allows the government to make it clear the CII owner is responsible for the cybersecurity of its virtualized infrastructure, not third parties involved in the supply of the underlying physical infrastructure, he said. The Cybersecurity Act lists 11 CII sectors, which include water, health care, maritime, infocommunications, banking and finance, and aviation. The Act outlines a regulatory framework that formalizes the duties of CII providers in securing systems under their responsibility, including before and after a cybersecurity incident has occurred.


How data and tech are transforming L&D in NBFCs

Data is reshaping the digital economy and its relevance in L&D cannot be overstated. By leveraging data analytics, NBFCs can gain valuable insights into existing employee skill gaps, learning preferences and performance metrics. ... From immersive virtual classrooms to mobile learning apps, technology has evolved and made the impossible possible. By embracing innovative learning technologies, NBFCs can deliver personalised and on-demand training experiences that will empower employees to learn and grow as professionals. Furthermore, the advent of artificial intelligence and machine learning has boosted the efficiency of L&D programmes by providing personalised recommendations, adaptive assessments, and real-time feedback. ... The success of Learning & Development (L&D) programmes now hinges critically on integrating cutting-edge technology to foster a culture of continuous learning and development. By leveraging data-driven insights and embracing advanced technologies, HR professionals can cultivate a growth mindset among employees, encouraging them to embrace new challenges and opportunities.


Best Practices for Surviving a Cyber Breach

When hit with a cyber breach, the first thing you do is look at the incident response plan. "If you're discussing when you're in the middle of a breach, 'Should we call the FBI or not? Should we do that?' That's a problem," Powers said. "That's something you should already have planned for and had discussions. … When you're thinking incident response, you're thinking the plan first." Pasteris added that it is vital to know what your assets are, as things fall through the cracks. Not only should you know what applications you use, but how you are protecting those applications. "A lot of organizations don't keep track of their assets," he said. "How are they protected, how they do defense in depth around those apps." ... A big question, according to Jay Martin, security practice lead at Blue Mantis, is if and when you should call the FBI after a cyber breach, as a lot of companies worry about getting on the FBI's radar. "Do we call the FBI, not call the FBI?" he asked. "And what are they going to do for us when we call them?" There are advantages to calling the FBI, said Joe Bonavolonta, managing partner at global risk and intelligence advisory firm Sentinel, who served more than 27 years with the FBI, including a stint as head of the FBI counterintelligence program.


A Career in Cyber Security: Navigating the Path to a Digital Safekeeping Profession

Cyber security represents not just a robust field teeming with opportunities but also an increasingly pivotal aspect of global digital infrastructure. With the prevalence of digital threats, your expertise in this domain can lead to a rewarding and socially significant cyber security career. Employers across various sectors seek professionals who can protect their data and systems, offering a broad market for your skills. Your career in cyber security could take many forms, from positions like analysts and engineers to managerial and senior leadership roles. Understanding the array of roles you could undertake is crucial, and specialising in a particular area can not only sharpen your skills but also elevate your value in this dynamic industry. Whether you're just embarking on your professional journey or looking to upskill, a career in cyber security presents a sustainable pathway with myriad professional opportunities. Staying informed about the latest trends, requirements, and certifications, such as the Cybersecurity Maturity Model Certification (CMMC) 2.0, can enhance your employability and trust within the defence sector, for example. 


Cisco reimagines cybersecurity at RSAC 2024 with AI and kernel-level visibility

“There’s overconfidence in the ability to handle cyber-attacks, with 80% of companies feeling confident in their readiness, but only 3% are truly prepared. The downside effects of not being resilient are tragic. We must shift to creating a first generation of something completely new,” Jeetu Patel, executive vice president and general manager of Security and Collaboration for Cisco, told VentureBeat citing findings from the 2024 Cisco Cybersecurity Readiness Index. ... “There are three key technological shifts that are occurring, which are going to fundamentally change how we solve these problems. The first is AI, the second is kernel-level visibility, and the third is hardware acceleration,” Patel said. Patel says these three technological shifts form the foundation of Cisco’s new generation of cybersecurity hyper-distributed frameworks, starting with HyperShield. Patel and Gillis explained the technological shifts and their implications on why and how cybersecurity needs to be reimagined.


Managing Technical Debt: Strategies for Balancing Speed and Quality in Development Projects

When speed takes precedence over quality, the accumulation of technical debt becomes a significant challenge. Technical debt refers to the consequences of taking shortcuts or compromising code quality to meet deadlines or achieve quick results. It includes inefficient code, outdated libraries, inadequate documentation, and other technical shortcomings that accumulate over time. Just like financial debt, technical debt must be paid off eventually in the form of ongoing maintenance, bug fixing, and refactoring. Striking the right balance between speed and quality ensures the delivery of software that meets both immediate and long-term goals. It enables developers to build code that is efficient, scalable, and maintainable, while also allowing for timely delivery and competitive advantage. Finding this optimal balance requires a combination of effective project management, proper resource allocation, and adherence to coding best practices. By prioritizing quality without sacrificing speed, development teams can create a solid foundation that allows for ongoing enhancements and future flexibility.


Architecting Resilience: Multi-Cloud Strategies for Enhanced Business Continuity

A multi-cloud architecture involves using two or more cloud computing services from different providers, including any combination of Infrastructure as a Service (IaaS), Platform as a Service (PaaS), or Software as a Service (SaaS). The goal is to eliminate reliance on a single vendor, optimize services and capabilities, and improve contingency planning. ... Despite its strategic benefits, the transition to a multi-cloud strategy is not without challenges: Technical Complexity: Managing multiple platforms can be complex and requires new skill sets. Solution: Invest in skilled cloud architects who understand the intricacies of different cloud environments. Utilize comprehensive management tools that provide a unified view of all cloud resources, simplifying resource allocation and monitoring. Cultural Resistance: Changes in IT infrastructure can meet with internal resistance due to unfamiliarity with new systems. Solution: Engage all stakeholders early in the planning process, including IT teams and business units. Provide training and continuous support to ease the transition, demonstrating how multi-cloud strategies align with broader business objectives.



Quote for the day:

"One advantage of talking to yourself is that you know at least somebody's listening." -- Franklin P. Jones

Daily Tech Digest - May 10, 2024

Optimize AI at Scale With Platform Engineering for MLOps

Just as platform engineering emerged from the DevOps movement to streamline app development workflows, so too must platform engineering streamline the workflows of MLOps. To achieve this, one must first recognize the fundamental differences between DevOps and MLOps. Only then can one produce an effective platform engineering solution for ML engineers. To enable AI at scale, enterprises must commit to developing, deploying and maintaining platform engineering solutions that are purpose-built for MLOps. Whether due to data governance requirements or practical concerns about moving vast volumes of data over significant geographical distances, MLOps at scale requires enterprises to utilize a spoke-and-wheel approach: model development and training occur centrally, trained models are distributed to edge locations for fine-tuning on local data, and fine-tuned models are deployed close to where end users interact with them and the AI applications they leverage. ... Enterprises should hire engineers with MLOps experience to fill platform engineering roles appropriately. According to research from the World Economic Forum, AI is projected to create around 97 million new jobs by 2025.
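
A toy orchestration sketch of that spoke-and-wheel flow, with hypothetical site names and stand-in train, fine-tune, and deploy functions, might look like this:

```python
# Minimal sketch only; real pipelines would use actual training jobs,
# model registries, and edge deployment tooling.
EDGE_SITES = ["frankfurt", "singapore", "sao-paulo"]

def train_central(dataset: str) -> dict:
    # Centralized pretraining on pooled, governed data.
    return {"weights": f"base-model-from-{dataset}", "version": 1}

def fine_tune(base_model: dict, site: str) -> dict:
    # Each spoke adapts the shared model on local data that never leaves the site.
    return {**base_model, "adapter": f"{site}-local-adapter"}

def deploy(model: dict, site: str) -> None:
    print(f"serving {model['adapter']} near users at {site}")

base = train_central("central-governed-corpus")
for site in EDGE_SITES:
    deploy(fine_tune(base, site), site)
```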


The Blockchain Integrity Act: Latest Attempt to Restrict Financial Privacy

In short, the Blockchain Integrity Act would first establish a two-year moratorium that prohibits financial institutions from going anywhere near cryptocurrency that has been routed through a mixer. With that two-year moratorium in place, the Blockchain Integrity Act would then require the Department of the Treasury to study how people use mixers and other privacy-enhancing technology. ... The second half of the legislation—the request for a study—is less concerning if it’s considered alone and without the surrounding context. The request seeks information regarding different types of privacy-enhancing technology, illicit and legitimate use history, and an analysis of what the government’s role might be here. Those are all reasonable inquiries. Again, without additional context, it’s an encouraging sign that Representative Casten is interested in learning more about how this technology is used for both better and worse. Yet what isn’t encouraging is that Representative Casten introduced the bill saying that “until we’ve studied [privacy enhancing technologies like mixers] and have a good audit trail, the presumption should be that these are money laundering channels.”


Some strategies for CISOs freaked out by the specter of federal indictments

“Some CISOs feel like they’re the frog that’s in the water that’s starting to boil, and they don’t like that feeling, and they want to make sure that they’re doing the right things to navigate that heat,” Sullivan said during a panel discussion, “CISOs Under Indictment: Case Studies, Lessons Learned, and What’s Next,” at this year’s RSA Conference. The panel of current and former CISOs emphasized that in this environment, CISOs need to document their roles and responsibilities, involve the right people in incident response and decision-making processes, and have the courage to stand up for their convictions to minimize the risk that they will face the same fates as Sullivan and Brown. ... “The heat is up because the reality is you’ve got these entities in government who are responding to a huge rise in cybercrime in a way that no one can hide. It’s not like in the old days when if an incident happened, most people wouldn’t notice when stuff happens. Today, the whole world notices,” he said. Blauner’s bottom-line advice to CISOs to protect themselves is to “take a look at every governance document you’ve got and really make sure that it’s crystal clear about roles and responsibilities, especially around who makes risk management decisions.”


Wearable devices can now harvest our brain data. Australia needs urgent privacy reforms

In a background paper published earlier this year, the Australian Human Rights Commission identified several risks to human rights that neurotechnology may pose, including rights to privacy and non-discrimination. Legal scholars, policymakers, lawmakers and the public need to pay serious attention to the issue. The extent to which tech companies can harvest cognitive and neural data is particularly concerning when that data comes from children. This is because children fall outside of the protection provided by Australia’s privacy legislation, as it doesn’t specify an age when a person can make their own privacy decisions. The government and relevant industry associations should conduct a candid inquiry to investigate the extent to which neurotechnology companies collect and retain this data from children in Australia. The private data collected through such devices is also increasingly fed into AI algorithms, raising additional concerns. These algorithms rely on machine learning, which can manipulate datasets in ways unlikely to align with any consent given by a user.


Cloud environments beyond the Big Three

The resurgence and innovation in edge computing and on-premises technology further support the trend toward diversification as data generation and consumption locations continue to spread geographically. ... Edge computing addresses these limitations by processing data closer to where it is generated. This drastically reduces latency and enhances the user experience in applications such as IoT, retail tech, and smart manufacturing. Although many consider edge computing to be small devices, it also includes entire data centers and smaller server installations that exist to serve a specific business location. Many enterprises don’t see the wisdom of sending their data on a 2,000-mile round trip to the point of presence for a public cloud provider, which happens more often than we understand. Additionally, although the cloud offers good scalability and flexibility, concerns over data sovereignty and security continue to push certain industries towards on-premises solutions. Sensitive data and critical applications in sectors such as finance, government, and healthcare often necessitate keeping data in-house under strict regulatory frameworks.


Controlling chaos using edge computing hardware: Digital twin models promise advances in computing

Using machine learning tools to create a digital twin (a virtual copy) of an electronic circuit that exhibits chaotic behavior, researchers found that they were successful at predicting how it would behave and at using that information to control it. Many everyday devices, like thermostats and cruise control, utilize linear controllers—which use simple rules to direct a system to a desired value. Thermostats, for example, employ such rules to determine how much to heat or cool a space based on the difference between the current and desired temperatures. Yet because of how straightforward these algorithms are, they struggle to control systems that display complex behavior, like chaos. As a result, advanced devices like self-driving cars and aircraft often rely on machine learning-based controllers, which use intricate networks to learn the optimal control algorithm needed to operate efficiently. However, these algorithms have significant drawbacks, the most demanding of which is that they can be extremely challenging and computationally expensive to implement.
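
To illustrate how simple such a linear rule is, here is a minimal proportional thermostat-style controller in Python; the gain and clamping range are arbitrary illustrative values. Chaotic systems defeat rules this simple, which is why the researchers turn to learned, digital-twin-based controllers.

```python
def proportional_heater_output(setpoint_c: float, measured_c: float, gain: float = 0.8) -> float:
    """Linear rule: heat in proportion to the gap between desired and actual temperature."""
    error = setpoint_c - measured_c
    # Clamp to the 0-100% range a heater can actually deliver.
    return max(0.0, min(100.0, gain * error * 10))

for temp in (18.0, 20.5, 22.0):
    print(temp, "->", proportional_heater_output(setpoint_c=21.0, measured_c=temp))
```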


Digital recreations of dead people need urgent regulation, AI ethicists say

Such services, which are already technically possible to create and legally permissible, could let users upload their conversations with dead relatives to “bring grandma back to life” in the form of a chatbot, researchers from the University of Cambridge suggest. They may be marketed at parents with terminal diseases who want to leave something behind for their child to interact with, or simply sold to still-healthy people who want to catalogue their entire life and create an interactive legacy. But in each case, unscrupulous companies and thoughtless business practices could cause lasting psychological harm and fundamentally disrespect the rights of the deceased, the paper argues. “Rapid advancements in generative AI mean that nearly anyone with internet access and some basic knowhow can revive a deceased loved one,” said Dr Katarzyna Nowaczyk-Basińska, one of the study’s co-authors at Cambridge’s Leverhulme centre for the future of intelligence (LCFI). “This area of AI is an ethical minefield. It’s important to prioritise the dignity of the deceased, and ensure that this isn’t encroached on by financial motives of digital afterlife services, for example.”


How To Take The A-I-M Approach To Leadership

I like to break down the concept of taking aim into three components, which I call the A-I-M approach: appreciation, imagination and motivation. The common thread across all three of these principles is communication—and leaders cannot be effective without it. Showing genuine gratitude is a foundational aspect of effective leadership. Expressing heartfelt encouragement demonstrates empathy and humility. And this simple show of appreciation directly benefits the organization by motivating employees to continue contributing to the company’s success and nurturing their loyalty. ... A leader’s job is not to be the author of all ideas but to inspire team members to tap into their imaginations and present fresh approaches to solving problems, delivering solutions and communicating with clients.  ... One of the responsibilities of a leader is to understand what moves their teams into action. As author and leadership coach John Maxwell famously wrote, “A leader is great not because of his or her power, but because of his or her ability to empower others.” I call that motivation.


Colorado AI legislation further complicates compliance equation

CIOs might struggle with the bill’s language because the focus is on whether AI — in any form — helps make “consequential decisions” that could impact Colorado residents. The bill defines consequential decision as being any decision “that has a material legal or similarly significant effect on the provision or denial to any consumer,” which includes educational enrollment, employment or employment opportunity, financial or lending service, healthcare services, housing, insurance, or a legal service. ... Another provision could prove onerous for CIOs who do not have full knowledge of every AI implementation in use in their environment, as it requires companies to make “a publicly available statement summarizing the types of high-risk systems that the deployer currently deploys, how the deployer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of each of these high-risk systems and the nature, source, and extent of the information collected and used.” ... One especially dicey area in the legislation that should concern CIOs is when AI — especially generative AI — acts on its own. 


AI's Game-Changing Role in Finance and Audit Processes

Auditors can face several risks when using AI. These risks include over-reliance on AI-generated insights, potential biases and quality issues from incomplete or poor-quality data, and cybersecurity threats such as the hacking of confidential data shared with AI tools. Thus, it is necessary to ensure compliance and implement safeguarding measures. Following are some of the possible measures that can be implemented to mitigate the above-mentioned risks. Human judgement: While AI is a great tool to be incorporated in the professional world to help auditors and organisations streamline their existing processes, AI works on standard algorithms that can’t be customised on a case-to-case basis. Therefore, to ensure the accuracy of the results, a human review can be put in place to check and validate the accuracy of output results. Updating back-end algorithms: The better the algorithms, the better the results. Regular updates to the back-end algorithms can yield more accurate and improved outputs, adapting to changing scenarios and data formats, ultimately mitigating the risk of incorrect or inaccurate results.



Quote for the day:

"Don't find fault, find a remedy." -- Henry Ford

Daily Tech Digest - May 09, 2024

Red Hat delivers accessible, open source Generative AI innovation with Red Hat Enterprise Linux AI

RHEL AI builds on this open approach to AI innovation, incorporating an enterprise-ready version of the InstructLab project and the Granite language and code models along with the world’s leading enterprise Linux platform to simplify deployment across a hybrid infrastructure environment. This creates a foundation model platform for bringing open source-licensed GenAI models into the enterprise. RHEL AI includes: open source-licensed Granite language and code models that are supported and indemnified by Red Hat; a supported, lifecycled distribution of InstructLab that provides a scalable, cost-effective solution for enhancing LLM capabilities and making knowledge and skills contributions accessible to a much wider range of users; and optimised bootable model runtime instances with Granite models and InstructLab tooling packages as bootable RHEL images via RHEL image mode, including optimised PyTorch runtime libraries and accelerators for AMD Instinct™ MI300X, Intel and NVIDIA GPUs, and NeMo frameworks.


Regulators are coming for IoT device security

Up to now, the IoT industry has relied mainly on security by obscurity and the results have been predictable: one embarrassing compromise after another. IoT devices find themselves recruited into botnets, connected locks get trivially unlocked, and cars can get remotely shut down while barreling down the highway at 70mph. Even Apple, which may have the most sophisticated hardware security team on the planet, has faced some truly terrible security vulnerabilities. Regulators have taken note, and they are taking action. In September 2022, NIST fired a warning shot by publishing a technical report that surveyed the state of IoT security and made a series of recommendations. This was followed by a voluntary regulatory scheme—the Cyber Trust Mark, published by the FCC in the US—as well as a draft of the European Union’s upcoming Cyber Resilience Act (CRA). Set to begin rolling out in 2025, the CRA will create new cybersecurity requirements to sell a device in the single market. Standards bodies have not stayed idle. The Connectivity Standards Alliance published the IoT Device Security Specification in March of this year, after more than a year of work by its Product Security Working Group.


Australia revolutionises data management challenges

In Australia, the importance of data literacy is growing rapidly. It is now more essential than ever to be able to comprehend and effectively communicate data as valuable information. The significance of data literacy cannot be overemphasised. Highlighting the importance of data literacy across government agencies is key to unlocking the true power of data. Understanding which data to use for problem-solving, employing critical thinking to comprehend and tackle data strengths and limitations, strategically utilising data to shape policies and implement effective programmes, regulations, and services, and leveraging data to craft a captivating narrative are all essential components of this process. Nevertheless, the ongoing challenge lies in ensuring that employees have the ability to interpret and utilise data effectively. Individuals who are inexperienced with data may find it challenging to effectively work with data, comprehend intricate datasets, analyse patterns, and extract valuable insights. Organisations are placing a strong emphasis on data literacy initiatives, aiming to turn individuals with limited data knowledge into experts in the field. 


Navigating Architectural Change: Overcoming Drift and Erosion in Software Systems

Architectural drift involves the introduction of design decisions that were not part of the original architectural plan, yet these decisions do not necessarily contravene the foundational architecture. In contrast, architectural erosion occurs when new design considerations are introduced that directly conflict with or undermine the system's intended architecture, effectively violating its guiding principles. ... In software engineering terms, a system may start with a clean architecture but, due to architectural drift, evolve into a complex tangle of multiple architectural paradigms, inconsistent coding practices, redundant components, and dependencies. On the other hand, architectural erosion could be likened to making alterations or additions that compromise the structural integrity of a house. For instance, deciding to remove or alter key structural elements, such as knocking down a load-bearing wall to create an open-plan layout without proper support, or adding an extra floor without considering the load-bearing capacity of the original walls.


Strong CIO-CISO relations fuel success at Ally

We identify the value we are creating and capturing before we kick off a technology project, and it’s a joint conversation with the business. I don’t think it’s just the business responsibility to say my customer acquisition is going to go up, or my revenue is going to go up by X. There is a technology component to it, which is extremely critical, especially as a full-scale digital-only organization. What does it take for you to build the capability? How long will it take? How much does it cost and what does it cost to run it? ... Building a strong leadership team is critical. Empowering them is even more critical. When people talk about empowerment, they think it means I leave my leaders alone and they go do whatever they want. It’s actually the opposite. We have sensitive and conflict-filled conversations, and the intent of that is to make each other better. If I don’t understand how my leaders are executing, I won’t be able to connect the dots. It is not questioning what they’re doing; it’s asking questions for my learning so I can connect and share learnings from what other leaders are doing. That’s what leads us to preserving that culture.


To defend against disruption, build a thriving workforce

To build a thriving workplace, leaders must reimagine work, the workplace, and the worker. That means shifting away from viewing employees as cogs who hit their deliverables then turn back into real human beings after the day is done. Employees are now more like elite artists or athletes who are inspired to produce at the highest levels but need adequate time to recharge and recover. The outcome is exceptional; the path to getting there is unique. ... Thriving is more than being happy at work or the opposite of being burned out. Rather, one of the cornerstones of thriving is the idea of positive functioning: a holistic way of being, in which people find a purposeful equilibrium between their physical, mental, social, and spiritual health. Thriving is a state that applies across talent categories, from educators and healthcare specialists to data engineers and retail associates. ... In this workplace, people at every level are capable of being potential thought leaders who have influence through the right training, support, and guidance. They don’t have to be just “doers” who simply implement what others tell them to. 


Tips for Controlling the Costs of Security Tools

The total amount that a business spends on security tools can vary widely depending on factors like which types of tools it deploys, the number of users or systems the tools support and the pricing plans of tool vendors. But on the whole, it’s fair to say that tool expenditures are a significant component of most business budgets. Moody’s found, for example, that companies devote about 8% of their total budget to security. That figure includes personnel costs as well as tool costs, but it provides a sense of just how high security spending tends to be relative to overall business expenses. These costs are likely only to grow. IDC believes that total security budgets will increase by more than a third over the next few years, due in part to rising tool costs. This means that finding ways to rein in spending on security tools is important not just for reducing overall costs today, but also preventing cost overruns in the future. Of course, reducing spending can’t amount simply to abandoning critical tools or turning off important features.


UK Regulator Tells Platforms to 'Tame Toxic Algorithms'

The Office of Communications, better known as Ofcom, on Wednesday urged online intermediaries, which include end-to-end encrypted platforms such as WhatsApp, to "tame toxic algorithms." Ensuring recommender systems "do not operate to harm children" is a measure the regulator included in a draft proposal for regulations enacting the Online Safety Act, legislation the Conservative government approved in 2023 that is intended to limit children's exposure to damaging online content. The law empowers the regulator to order online intermediaries to identify and restrict pornographic or self-harm content. It also imposes criminal prosecution for those who send harmful or threatening communications. Instagram, YouTube, Google and Facebook are among the 100,000 web services that come under the scope of the regulation and are likely to be affected by the new requirements. "Any service which operates a recommender system and is at higher risk of harmful content should identify who their child users are and configure their algorithms to filter out the most harmful content from children's feeds and reduce the visibility of other harmful content," Ofcom said.


Businesses lack AI strategy despite employee interest — Microsoft survey

“While leaders agree using AI is a business imperative, and many say they won’t even hire someone without AI skills, they also believe that their companies lack a vision and plan to implement AI broadly; they’re stuck in AI inertia,” Colette Stallbaumer, general manager of Copilot and Cofounder of Work Lab at Microsoft, said in a pre-recorded briefing. “We’ve come to the hard part of any tech disruption, moving from experimentation to business transformation,” Stallbaumer said. While there’s clear interest in AI’s potential, many businesses are proceeding with caution with major deployments, say analysts. “Most organizations are interested in testing and deployment, but they are unsure where and how to get the most return,” said Carolina Milanesi, president and principal analyst at Creative Strategies. Security is among the biggest concerns, said Milanesi, “and until that is figured out, it is easier for organizations to shut access down.” As companies start to deploy AI, IT teams face significant demands, said Josh Bersin, founder and CEO of The Josh Bersin Company. 


Mayorkas, Easterly at RSAC Talk AI, Security, and Digital Defense

While acknowledging the increasingly ubiquitous use of AI in many services across the nation, Mayorkas commented on the advisory board’s conversation about leveraging that technology in cybersecurity. “It’s a very interesting discussion on what the definition of ‘safe’ is,” he said. “For example, most people now when they speak of the civil rights, civil liberties implications, categorize that under the responsible use of AI, but what we heard yesterday was an articulation of the fact that the civil liberties, civil rights implications of AI really are part and parcel of safety.” ... Technologies are shipped in ways that create risk and vulnerabilities, and they are configured and deployed in ways that are incredibly complex. “It’s eerily reminiscent of William Gibson's ‘Neuromancer,’” Krebs said. “When he talks about cyberspace, he said ‘the unthinkable complexity,’ and that’s what it's like right now to deploy and manage a large enterprise.” “We are just not sitting in place or standing in place because new technology is emerging on a regular basis,” he said.



Quote for the day:

"Successful people do what unsuccessful people are not willing to do. Don't wish it were easier; wish you were better." -- Jim Rohn

Daily Tech Digest - May 08, 2024

The Important Difference Between Generative AI And AGI

Here are the key differences: Capability: Generative AI excels at replication and is adept at producing content based on learned patterns and datasets. It can generate impressive results within its specific scope but doesn't venture beyond its programming. AGI, on the other hand, aims to be a powerhouse of innovation, capable of understanding and creatively solving problems across various fields, much like a human would. Understanding: Generative AI operates without any real comprehension of its output; it uses statistical models and algorithms to predict and generate results based on previous data. AGI, by contrast, would need to develop a genuine understanding of the world around it, making connections and having insights that are currently beyond the reach of any AI system. Application: Today, Generative AI is widely used across industries to enhance human productivity and foster creativity, performing tasks ranging from simple data processing to complex content creation. AGI, however, remains a conceptual goal. 


Top strategies for ensuring data center reliability and uptime in 2024

Robust security measures constitute another cornerstone of data center reliability, safeguarding against both cyber threats and physical intrusions. Cybersecurity protocols should encompass multifaceted defense strategies, including perimeter security, network segmentation, encryption, and intrusion detection systems. Regular vulnerability assessments and penetration testing help identify and remediate potential weaknesses before they can be exploited by malicious actors. Physical security measures, such as access controls, surveillance systems, and environmental monitoring, bolster protection against unauthorized access and environmental hazards. Additionally, robust disaster recovery and business continuity plans should be in place to ensure swift recovery in the event of a security breach or natural disaster. Automation and orchestration technologies offer further avenues for enhancing data center reliability by streamlining operations and reducing the risk of human errors.


Reassessing Agile Software Development: Is It Dead or Can It Be Revived?

Why, exactly, do so many folks seem to dislike — and in some cases loathe — agile software development? There's no simple answer, but common themes include: Lack of specificity: A lot of complaints about agile emphasize that the concept is too high-level. As a result, actually implementing agile practices can be confusing because it's rarely clear exactly how to put agile into practice. Plus, the practices tend to vary significantly from one organization to another. Unrealistic expectations: Some agile critics suggest that the concept leads to unrealistic expectations — particularly from managers who think that as long as a development team embraces agile, it should be able to release features very quickly. In reality, even the best-planned agile practices can't guarantee that software projects will always stick to intended timelines. Misuse of the term "agile": In some cases, developers complain that team leads or managers slap the "agile" label on software projects even though few practices actually align with the agile concept. In other words, the term has ended up being used very broadly, in ways that make it hard to define what even counts as agile.


An Architect’s Competing Narratives

The biggest objection to basing architecture on the traditional EA narrative is that it is not transformation focused. The ivory-tower analogy has stuck permanently in this space. The other architects in a practice are often quite vocal about their difficulty with the top-down control concepts that are necessary in the enterprise architect mindset. There are not enough of them to cover all the places they need to be. Their skills atrophy and therefore they lose the ability to critique others' work. They often feel connected to their scope as if it were seniority or authority. That connection with ‘Enterprise’ or ‘Domain’ sometimes causes conflict, especially if they have not personally delivered a solution or outcome in a long time. In addition, the scope-based titles seem to interact poorly with other leadership roles, both in IT and business, as there is no clear ownership. Another type of challenge has emerged in the last ten to fifteen years. You can think of this as the ‘pure EA’ or ‘whole EA’ challenge. Effectively, a group of practitioners and writers regularly point to technology-skilled EAs, call them IT EAs, and use that to minimize the value of those practitioners.


Exploring generative AI's impact on ethics, data privacy and collaboration

Implementing GenAI presents organizations with multifaceted challenges, particularly around data privacy and security. The accuracy of GenAI outputs, and the responsibility of organizations and employees to ensure that the outputs are representative and accurate, are also significant challenges. Governance, transparency, and the presence of unexpected biases are additional hurdles. Concerns range from accidentally breaching intellectual property and copyright by sharing data in an unlicensed or unvetted tool to the potential for privacy breaches and cybersecurity threats that GenAI can exacerbate. This data may contain private information about people, sensitive business use cases, or health care data. Unauthorized access to or inappropriate disclosure of these types of data can cause harm to individuals or organizations. While privacy and security were previously associated with intellectual property (IP) and cybersecurity, the definition and scope have expanded in recent years to encompass data access management, data localization and the rights of data subjects.


AI chip shortages continue, but there may be an end in sight

The breakneck pace of AI adoption over the past two years has strained the industry’s ability to supply the special high-performance chips needed to run the process-intensive operations of genAI and AI in general. Most of the focus on processor shortages has been on the exploding demand for Nvidia GPUs and alternatives from various chip designers such as AMD, Intel, and the hyperscale datacenter operators, according to Benjamin Lee ... Nvidia is tackling the GPU supply shortage by increasing its CoWoS and HBM production capacities, according to TrendForce. “This proactive approach is expected to cut the current average delivery time of 40 weeks in half by the second quarter [of 2024], as new capacities start to come online,” TrendForce said in its report. ... On the software side of the equation, LLM creators are also developing smaller models tailored for specific tasks; they require fewer processing resources and rely on local, proprietary data — unlike the massive, amorphous algorithms that boast hundreds of billions or even more than a trillion parameters.


CDOs’ biggest problem? Getting colleagues to understand their role

One reason the role may be misunderstood, the report says, is because it’s relatively new. The CDO position first gained momentum around 2008, to ensure data quality and transparency to comply with regulations following the housing credit crisis of that era. The CDO role also lacks a standard list of responsibilities, potentially adding to the confusion, note the report’s authors Thomas H. Davenport, Randy Bean, and Richard Wang. One possible definition of the CDO is the organization’s leader responsible for data governance and use, including data analysis, mining, and processing. In many cases, CDOs focus on business objectives, but in other cases, they have equal business and technology remits, according to the authors. ... “The role runs the gamut from being a very traditional IT-focused role that is oriented to the management of data, to one that resides in the business and is focused on the application of data to create value,” he says. “I anticipate that we will see the role solidify over the coming years, with a bias to value creation.” 


Open Source Is at a Crossroads

Struggles in open source communities undoubtedly stem from the greater economic climate. The start of the current decade saw a low interest rate environment, which Lorenc credits as ushering in a massive boom in the number of open source companies and projects. But now, we are experiencing significant realignment. “Time and money are even more scarce, making it harder for contributors or companies to allocate resources,” he said. “Many, but not all, open source businesses are at a crossroads,” said Fermyon CEO Matt Butcher. For ages, the theory was that you built an open source tool, established a community and then figured out how to monetize it. But now, the companies in that final stage are under immense pressure to increase profit, he said. “For some, that means abandoning the open source model.” A lack of resources to justify open source may also stem from a “plethora of riches” problem, explains Chris Aniszczyk, the chief technology officer of CNCF. With so many projects vying for attention, it’s easier than ever for innovative projects to lose out on the resources they require. 


Can NIS2 and DORA improve firms’ cybersecurity?

One of the biggest issues with both NIS2 and DORA is that they overly focus on promoting security and resilience without providing end-users with a blueprint for success. In paying so much attention to the outcomes that enterprises should be working towards, they fail to offer clear, step-by-step guidance on the actions that businesses should take to reach those end goals. This is in part due to a recognition that every business is different. With each individual organisation having a better understanding of its own unique digital footprint, the belief is that it makes more sense for enterprises to interpret the guidelines in a way that makes sense for them. This is very much the case with DORA, where enterprises shoulder the responsibility of not only defining what qualifies as a business-critical service but also pinpointing its interconnected dependencies. Unfortunately, allowing regulations to remain open to interpretation in this manner can lead to confusion and inconsistencies, adding complexity to the environment for both organisations and auditors.


Trusted Data Access and Sharing — Why Automation Is the Key to Achieving Value from Data Democratization

As organizations endeavor to democratize data access and empower individuals across the enterprise, the last mile of data delivery emerges as the critical phase in the journey. This final stretch represents an opportunity to share data and data products responsibly, helping to ensure that insights reach the right users when needed and with appropriate context. However, achieving individualized data access poses significant challenges and concerns that need addressing to realize the potential of data democratization. The last mile of data delivery is where organizations must contend with a wide range of variables, including specific user requirements, use cases, rules, and contextual nuances that inform policies for granting conditional entitlements to access the data responsibly.  ... The controls on sharing data need to be as granular as possible about who is requesting access and under what conditions, so that the provisioning of each data type is justified. However, the traditional manual approach to last-mile data delivery, which requires negotiating with data consumers to understand their needs, is a significant roadblock to democratization.
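As a rough illustration of the conditional entitlements described above, here is a minimal sketch assuming a simple attribute-based policy model; the roles, purposes, data types, and regions are invented for the example and are not from the article.

```python
# A hedged sketch of a "who is asking, for what, under which conditions" check.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    requester_role: str
    purpose: str
    data_type: str
    region: str

# Conditional entitlements: (role, purpose) -> allowed data types and regions (illustrative).
POLICIES = {
    ("analyst", "churn-modelling"): {"data_types": {"usage_metrics"}, "regions": {"EU", "US"}},
    ("marketer", "campaign-targeting"): {"data_types": {"aggregated_segments"}, "regions": {"US"}},
}

def is_entitled(req: AccessRequest) -> bool:
    """Grant access only when role, purpose, data type, and region all match a policy."""
    policy = POLICIES.get((req.requester_role, req.purpose))
    return bool(policy) and req.data_type in policy["data_types"] and req.region in policy["regions"]

print(is_entitled(AccessRequest("analyst", "churn-modelling", "usage_metrics", "EU")))     # True
print(is_entitled(AccessRequest("marketer", "campaign-targeting", "usage_metrics", "US")))  # False
```

Automating a check like this is what removes the manual negotiation step the article identifies as the roadblock: the conditions are encoded once and evaluated per request.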



Quote for the day:

"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Landry

Daily Tech Digest - May 07, 2024

How generative AI is redefining data analytics 

When applied to analytics, generative AI:

- Streamlines the foundational data stages of ELT: Predictive algorithms are applied to optimize data extraction, intelligently organize data during loading, and transform data with automated schema recognition and normalization techniques.
- Accelerates data preparation through enrichment and data quality: AI algorithms predict and fill in missing values, identify and integrate external data sources to enrich the data set, while advanced pattern recognition and anomaly detection ensure data accuracy and consistency.
- Enhances analysis of data, such as geospatial and autoML: Mapping and spatial analysis through AI-generated models enable accurate interpretation of geographical data, while automated selection, tuning, and validation of machine learning models increase the efficiency and accuracy of predictive analytics.
- Elevates the final stage of analytics, reporting: Custom, generative AI-powered applications provide interactive data visualizations and analytics tailored to specific business needs.
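To make the data-preparation item above concrete, here is a minimal sketch (assuming a pandas/scikit-learn toolchain, which the article does not name) of predictive imputation of missing values combined with anomaly detection; column names and sample values are illustrative.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables IterativeImputer)
from sklearn.impute import IterativeImputer
from sklearn.ensemble import IsolationForest

# Toy data set with gaps and one suspicious row.
df = pd.DataFrame({
    "revenue": [120.0, np.nan, 95.0, 130.0, 5000.0],  # 5000.0 is an obvious outlier
    "orders":  [12.0, 10.0, np.nan, 14.0, 13.0],
})

# Predict-and-fill missing values from the other columns.
imputed = pd.DataFrame(
    IterativeImputer(random_state=0).fit_transform(df),
    columns=df.columns,
)

# Flag anomalous rows (-1) so they can be reviewed before loading or reporting.
imputed["anomaly"] = IsolationForest(random_state=0).fit_predict(imputed)
print(imputed)
```

In a real pipeline these steps would sit between loading and reporting, with flagged rows routed to review rather than silently dropped.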


Open-source or closed-source AI? Data quality and adaptability matter more

Licensing and usage terms of service matter in that they dictate how you use a particular model — and even what you use it for. Even so, getting caught up in the closed vs. open zealotry is shortsighted at a time when 70% of CEOs surveyed expect gen AI to significantly alter the way their companies create, deliver and capture value over the next three years, according to PwC. Rather, you should focus on the quality of your data. After all, data will be your competitive differentiator — not the model. ... Experimenting with different model types and sizes to suit your use cases is a critical part of the trial-and-error process. Right-sizing, or deploying the most appropriate model sizes for your business, is more crucial. Do you require a broad, boil-the-ocean approach that spans as much data as possible to build a digital assistant with encyclopedic knowledge? A large LLM trained on hundreds of billions of data points may work well. ... Of course, the gen AI model landscape is ever evolving. Future models will look and function differently than those of today. Regardless of your choices, with the right partner you can turn your data ocean into a wellspring of insights.


Tips for Building a Platform Engineering Discipline That Lasts

A great platform engineer is defined both by their ability to create infrastructure and by their ability to advocate for and guide others (which is where communication skills come in) — especially in the platforms that are maturing today. As far as hard skills go, the platform engineer should have experience in cloud platforms, CI/CD, IaC, security, and automation. Other roles you’ll need include a product owner to manage platform stakeholders and track KPIs. Our 2024 State of DevOps report found that 70% of respondents said a product manager was important to the platform team – 52% of whom called the role “critical”. To avoid complexity and scaling issues, you’ll also need architects with the vision and skills to help the platform engineering team design and build the platform. Infrastructure as code (IaC) is version control for your infrastructure. It makes infrastructure human-readable, auditable, repeatable, scalable, and securable. IaC also lets disparate teams — developers, operations, and QA — review, collaborate, iterate, and maintain infrastructure code simultaneously.
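As a small illustration of the IaC point, here is a hedged sketch using Pulumi's Python SDK (one of several IaC tools; the article does not prescribe a specific one). The resource name and tags are invented for the example; the point is that the infrastructure definition lives in version control and is reviewed like any other code.

```python
import pulumi
import pulumi_aws as aws

# An object-storage bucket for build artifacts, declared in code rather than
# clicked together in a console; changes go through the same review process as
# application code.
artifacts = aws.s3.Bucket(
    "platform-artifacts",
    tags={"team": "platform-engineering", "managed-by": "pulumi"},
)

# Exported outputs are visible to other teams and other stacks.
pulumi.export("artifacts_bucket", artifacts.id)
```

Running `pulumi up` against a stack would reconcile the declared state with what actually exists, which is what makes the setup repeatable and auditable.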


What Is the American Privacy Rights Act, and Who Supports It?

The APRA ostensibly is about data, but AI is also covered a bit. Companies must evaluate their “covered algorithms” before deploying them and provide that evaluation to the FTC and the public. Companies must also adhere to people’s requests to opt out of the use of any algorithm related to housing, employment, education, health care, insurance, credit, or access to places of public accommodation. The APRA would be enforced by a new bureau operating under the Federal Trade Commission (FTC). State attorneys general would also be able to enforce the new law. It would also allow individuals to file private lawsuits against companies that violate the law. There are several important exceptions in the APRA. For instance, small businesses, defined as having less than $40 million in annual revenue or collecting data on 200,000 or fewer individuals (as long as they’re not in the data-selling business themselves), are exempt from the APRA’s requirements. Governmental agencies and organizations working for them are also exempt, as are non-profit organizations whose main purpose is fraud-fighting.
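As a worked illustration of the small-business exemption summarized above, the following sketch encodes the reported thresholds as a simple check. The function name and parameters are hypothetical, and the logic mirrors only the article's summary (note its use of "or"), not the bill's legal text.

```python
def appears_exempt_small_business(annual_revenue_usd: float,
                                  individuals_with_data: int,
                                  sells_data: bool) -> bool:
    """Rough check against the thresholds described in the article:
    under $40M annual revenue OR data on 200,000 or fewer individuals,
    and not in the data-selling business."""
    if sells_data:
        return False
    return annual_revenue_usd < 40_000_000 or individuals_with_data <= 200_000

print(appears_exempt_small_business(25_000_000, 500_000, sells_data=False))  # True (revenue test)
print(appears_exempt_small_business(90_000_000, 150_000, sells_data=False))  # True (individuals test)
print(appears_exempt_small_business(25_000_000, 150_000, sells_data=True))   # False (sells data)
```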


Empowering Users: Embracing Product-Centric Strategies in SaaS

A non-negotiable requirement for a SaaS product to succeed with a product-centric strategy is for it to be intuitively designed, with minimal friction and a focus on delivering value as quickly as possible. This is not a set-and-forget task: it demands a profound understanding of the critical user journey and ruthless prioritization of friction and pain-point elimination, rather than just plastering feature promotions through in- and out-of-product interventions. However, this cannot be done if teams don’t use data analytics or prioritize the voice of the customer through feedback loops to further product development and work towards building a loved and delightful product. A great example of a PLG pioneer is Figma. ... On the other hand, adopting a product-led growth approach requires fundamental organizational shifts. The success of PLG requires a combined, multidisciplinary team dedicated to continuous improvement and adaptation of the product to support both new customer acquisition and retention and growth.


6 tips to implement security gamification effectively

Gamification leverages elements of traditional gaming, online and offline, to boost engagement and investment in the learning process. Points, badges, and leaderboards reward successful actions, fostering a sense of achievement and friendly competition. Engaging scenarios and challenges simulate real-world threats, allowing trainees to apply knowledge practically. Difficulty levels keep learners engaged, while immediate feedback on decisions solidifies learning and highlights areas for improvement. Effective implementation hinges on transparency, simplicity, and a level playing field. A central dashboard that displays the same security data for everyone keeps things simple, fostering a shared understanding of progress. ... Personalized challenges help ensure engagement. New security teams might focus on mastering foundational tasks like vulnerability scans, while seasoned teams tackle advanced challenges like reducing response times for critical security events. This keeps everyone motivated and learning, while offering continuous improvement for the entire team.
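Here is a minimal sketch of the points-and-leaderboard mechanics described above; the team names, challenge names, and point values are illustrative assumptions rather than anything prescribed in the article.

```python
from collections import defaultdict

scores: dict[str, int] = defaultdict(int)

def award(team: str, challenge: str, points: int) -> None:
    """Record points for a completed training challenge and give immediate feedback."""
    scores[team] += points
    print(f"{team} completed '{challenge}' (+{points} points)")

def leaderboard() -> list[tuple[str, int]]:
    """Return teams ranked by total points, highest first, for the shared dashboard."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

award("blue-team", "vulnerability scan basics", 50)
award("soc-team", "reduce incident response time", 120)
award("blue-team", "phishing triage scenario", 80)
print(leaderboard())  # e.g. [('blue-team', 130), ('soc-team', 120)]
```

Keeping the leaderboard computed from one shared score table is a direct way to get the "same data for everyone" transparency the excerpt calls for.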


Rethinking ‘Big Data’ — and the rift between business and data ops

Just as data scientists need to think more like businesspeople, so too must businesspeople think more like data scientists. This goes to the issue of occupational identity. Executives need to expand their professional identities to include data. Data professionals need to recognize that ΔI (changes in information) does not necessarily equate to ΔB (changes in behavior). Going forward, data professionals are not just in the information/insight delivery business; they are in the “create insight that drives value-creating behavior” business. The portfolio of tools available now has democratized the practice of data science. One no longer needs to be a math genius or coding phenom to extract value from data — see Becoming a Data Head: How to Think, Speak, and Understand Data Science, Statistics, and Machine Learning by Alex J. Gutman and Jordan Goldmeier. ... Executives need ready access to data professionals to guide their use of data power tools. Data professionals need to be embedded in the business rather than quarantined in specialized data gulags.


The Technical Product Owner

There is a risk that the technical Product Owner or product manager might no longer focus on the “why” but start interfering with the “how,” which is the Developers’ domain. On the other hand, a technical Product Owner might help the Developers understand the long-term business implications of technical decisions made today. ... A technical Product Owner would be highly beneficial when the product involves complex technical requirements or relies heavily on specific technologies. For example, in projects involving intricate software architecture or specialized domain knowledge, a technical Product Owner can provide valuable guidance, facilitate more informed decision-making, and communicate effectively with the Developers. This deep technical understanding can lead to better solutions, improved product quality, and increased customer satisfaction, especially in industries where technical expertise is critical, such as software development or engineering.


The digital transformation divide in Europe’s banking industry

Europe’s digital divide is a product of typical characteristics: internet connectivity, digital literacy, and the availability of smartphones and digital devices. Disparities in broadband access between urban and rural communities remain stubbornly persistent. According to Eurostat, around 21% of rural households in the European Union do not have access to broadband internet, compared to only 2% of urban households. In Romania, which ranked lowest on the EU’s Digital Economy and Society Index in 2022, the market is dominated by incumbent banks. Only 69.1% of adults hold a bank account, pointing to low levels of financial literacy and inclusion – underpinned by a preference for a cash economy. In contrast, the UK has seen fintech adoption growth of over 60%, according to data from Tipalti, and Lithuania has established itself as an impressive fintech ecosystem backed by the nation’s central bank. However, it is too simplistic to reduce the digital divide to regional disparities, as the starker differences lie between countries themselves.


Why AMTD Is the Key to Stopping Zero-Day Attacks

AMTD technology uses polymorphism to create a randomized, dynamic runtime memory environment. Deployable on endpoints and servers, this polymorphic capability creates a prevention-focused solution that constantly moves system resources while leaving decoy traps in their place. What occurs next is that threats see these decoy resources where real ones should be and end up trapped. For users, it’s business as usual because they don't notice any difference—system performance is unaffected while security teams gain a new layer of preventative telemetry. Today, more and more companies are turning to AMTD technologies to defeat zero days. In fact, industry analysts like Gartner suggest that AMTD technology is paving the way for a new era of cyber defense possibilities. That’s because instead of trying to detect zero-day compromise, these technologies prevent exploits from deploying in the first place. Against zero-day attacks, this is the only defensive approach organizations can rely on.
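The mechanism described above can be illustrated with a purely conceptual sketch (not any vendor's implementation): real resources are relocated to randomized locations while decoys remain at the original ones, and any touch of a decoy is flagged. All names and values here are illustrative.

```python
import secrets

# Original "addresses" of two sensitive resources (toy values).
resources = {"credential_store": 0x1000, "session_table": 0x2000}

relocated = {}   # real resources, moved to randomized locations
decoys = {}      # decoys left at the original locations

for name, addr in resources.items():
    # Randomized relocation, kept outside the original address range.
    relocated[name] = addr + 0x10000 + secrets.randbits(12)
    decoys[addr] = name

def access(addr: int) -> str:
    """Legitimate callers use the relocated address; anything hitting a decoy is flagged."""
    if addr in decoys:
        return f"ALERT: decoy '{decoys[addr]}' touched - likely exploit trapped"
    return "legitimate access via relocated resource"

print(access(0x1000))                        # an attacker following the old address hits the decoy
print(access(relocated["session_table"]))    # a legitimate caller uses the new location
```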



Quote for the day:

"Always remember, your focus determines your reality." -- George Lucas