
Daily Tech Digest - May 14, 2025


Quote for the day:

"Success is what happens after you have survived all of your mistakes." -- Anonymous


3 Stages of Building Self-Healing IT Systems With Multiagent AI

Multiagent AI systems can allow significant improvements to existing processes across the operations management lifecycle. From intelligent ticketing and triage to autonomous debugging and proactive infrastructure maintenance, these systems can pave the way for IT environments that are largely self-healing. ... When an incident is detected, AI agents can attempt to debug issues with known fixes using past incident information. When multiple agents are combined within a network, they can work out alternative solutions if the initial remediation effort doesn’t work, while communicating the ongoing process to engineers. Keeping a human in the loop (HITL) is vital to verifying the outputs of an AI model, but agents must be trusted to work autonomously within a system to identify fixes and then report these back to engineers. ... The most important step in creating a self-healing system is training AI agents to be able to learn from each incident, as well as from each other, to become truly autonomous. For this to happen, AI agents cannot be siloed into incident response. Instead, they must be incorporated into an organization’s wider system, communicate with third-party agents, and be allowed to draw correlations from each action taken to resolve each incident. In this way, each organization’s incident history becomes the training data for its AI agents, ensuring that the actions they take are organization-specific and relevant.
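
A deliberately simplified, hypothetical Java sketch of the loop described above, with every type invented for illustration: an agent tries known fixes, the coordinator falls back to alternative agents when a fix fails, and every step is reported to a human reviewer.

import java.util.List;
import java.util.function.Consumer;

interface RemediationAgent {
    // Attempt to resolve the incident; return true only if the fix verified cleanly.
    boolean attemptFix(String incidentDescription);
    String name();
}

class IncidentCoordinator {
    private final List<RemediationAgent> agents;
    private final Consumer<String> notifyEngineer; // the human in the loop

    IncidentCoordinator(List<RemediationAgent> agents, Consumer<String> notifyEngineer) {
        this.agents = agents;
        this.notifyEngineer = notifyEngineer;
    }

    void handle(String incident) {
        for (RemediationAgent agent : agents) {
            notifyEngineer.accept(agent.name() + " attempting known fix for: " + incident);
            if (agent.attemptFix(incident)) {
                notifyEngineer.accept(agent.name() + " resolved: " + incident);
                return; // the outcome feeds the shared incident history used for future fixes
            }
        }
        notifyEngineer.accept("No automated fix worked, escalating to engineers: " + incident);
    }
}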


The three refactorings every developer needs most

If I had to rely on only one refactoring, it would be Extract Method, because it is the best weapon against creating a big ball of mud. The single best thing you can do for your code is to never let methods get bigger than 10 or 15 lines. The mess created when you have nested if statements with big chunks of code in between the curly braces is almost always ripe for extracting methods. One could even make the case that an if statement should have only a single method call within it. ... It’s a common motif that naming things is hard. It’s common because it is true. We all know it. We all struggle to name things well, and we all read legacy code with badly named variables, methods, and classes. Often, you name something and you know what the subtleties are, but the next person that comes along does not. Sometimes you name something, and it changes meaning as things develop. But let’s be honest, we are going too fast most of the time and as a result we name things badly. ... In other words, we pass a function result directly into another function as part of a boolean expression. This is… problematic. First, it’s hard to read. You have to stop and think about all the steps. Second, and more importantly, it is hard to debug. If you set a breakpoint on that line, it is hard to know where the code is going to go next.
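
A minimal Java sketch of those moves, using invented Order and Customer types purely for illustration: the function call buried in the boolean expression gets pulled out into a named variable, and the body of the if becomes a single well-named method call.

// Before: a condition buried inside the if, and a long inline body.
record Customer(boolean blocked) {}
record Order(boolean paid, Customer customer) {}

class OrderProcessorBefore {
    void process(Order order) {
        if (order.paid() && !order.customer().blocked()) {
            // ...imagine 30 more lines of shipping logic here...
            System.out.println("shipping order");
        }
    }
}

// After: Extract Variable gives the condition a name (and a place to set a breakpoint),
// and Extract Method turns the body into one call that says what it does.
class OrderProcessorAfter {
    void process(Order order) {
        boolean readyToShip = order.paid() && !order.customer().blocked();
        if (readyToShip) {
            ship(order);
        }
    }

    private void ship(Order order) {
        System.out.println("shipping order");
    }
}

Both versions behave identically; the second is far easier to read and to step through in a debugger.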


ENISA launches EU Vulnerability Database to strengthen cybersecurity under NIS2 Directive, boost cyber resilience

The EU Vulnerability Database is publicly accessible and serves various stakeholders, including the general public seeking information on vulnerabilities affecting IT products and services, suppliers of network and information systems, and organizations that rely on those systems and services. ... To meet the requirements of the NIS2 Directive, ENISA initiated a cooperation with different EU and international organisations, including MITRE’s CVE Programme. ENISA is in contact with MITRE to understand the impact and next steps following the announcement of the funding to the Common Vulnerabilities and Exposures Program. CVE data, data provided by Information and Communication Technology (ICT) vendors disclosing vulnerability information through advisories, and relevant information, such as CISA’s Known Exploited Vulnerability Catalogue, are automatically transferred into the EU Vulnerability Database. This will also be achieved with the support of member states, who established national Coordinated Vulnerability Disclosure (CVD) policies and designated one of their CSIRTs as the coordinator, ultimately making the EUVD a trusted source for enhanced situational awareness in the EU. 


Welcome to the age of paranoia as deepfakes and scams abound

Welcome to the Age of Paranoia, when someone might ask you to send them an email while you’re mid-conversation on the phone, slide into your Instagram DMs to ensure the LinkedIn message you sent was really from you, or request you text a selfie with a time stamp, proving you are who you claim to be. Some colleagues say they even share code words with each other, so they have a way to ensure they’re not being misled if an encounter feels off. ... Ken Schumacher, founder of the recruitment verification service Ropes, says he’s worked with hiring managers who ask job candidates rapid-fire questions about the city where they claim to live on their résumé, such as their favorite coffee shops and places to hang out. If the applicant is actually based in that geographic region, Schumacher says, they should be able to respond quickly with accurate details. Another verification tactic some people use, Schumacher says, is what he calls the “phone camera trick.” If someone suspects the person they’re talking to over video chat is being deceitful, they can ask them to hold up their phone camera to show their laptop. The idea is to verify whether the individual may be running deepfake technology on their computer, obscuring their true identity or surroundings.


CEOs Sound Alarm: C-Suite Behind in AI Savviness

According to the survey, CEOs now see upskilling internal teams as the cornerstone of AI strategy. The top two limiting factors impacting AI's deployment and use, they said, are the inability to hire adequate numbers of skilled people and to calculate value or outcomes. "CEOs have shifted their view of AI from just a tool to a transformative way of working," said Jennifer Carter, senior principal analyst at Gartner. Contrary to the CEO assessments reported by Gartner, most CIOs view themselves as the key drivers and leaders of their organizations' AI strategies. According to a recent report by CIO.com, 80% of CIOs said they are responsible for researching and evaluating AI products, positioning them as "central figures in their organizations' AI strategies." As CEOs increasingly prioritize AI, customer experience and digital transformation, these agenda items are directly shaping the evolving role and responsibilities of the CIO. But 66% of CEOs say their business models are not fit for AI purposes. Billions continue to be spent on enterprisewide AI use cases, but little has come in the way of returns. Gartner's forecast predicts a 76.4% surge in worldwide spending on gen AI by 2024, fueled by better foundational models and a global quest for AI-powered everything. But organizations have yet to see consistent results despite the surge in investment.


Dropping the SBOM, why software supply chains are too flaky

“Mounting software supply chain risk is driving organisations to take action. [There is a] 200% increase in organisations making software supply chain security a top priority and growing use of SBOMs,” said Josh Bressers, vice president of security at Anchore. ... “There’s a clear disconnect between security goals and real-world implementation. Since open source code is the backbone of today’s software supply chains, any weakness in dependencies or artifacts can create widespread risk. To effectively reduce these risks, security measures need to be built into the core of artifact management processes, ensuring constant and proactive protection,” said Douglas. If we take anything from these market analysis pieces, it may be true that organisations struggle to balance the demands of delivering software at speed while addressing security vulnerabilities to a level which is commensurate with the composable interconnectedness of modern cloud-native applications in the Kubernetes universe. ... Alan Carson, Cloudsmith’s CSO and co-founder, remarked, “Without visibility, you can’t control your software supply chain… and without control, there’s no security. When we speak to enterprises, security is high up on their list of most urgent priorities. But security doesn’t have to come at the cost of speed. ...”


Does agentic AI spell doom for SaaS?

The reason agentic AI is perceived as a threat to SaaS and not traditional apps is that traditional apps have all but disappeared, replaced by on-demand versions of former client software. But it goes beyond that. AI is considered a potential threat to SaaS for several reasons, mostly because of how it changes who is in control and how software is used. Agentic AI changes how work gets done because agents act on behalf of users, performing tasks across software platforms. If users no longer need to open and use SaaS apps directly because the agents are doing it for them, those apps lose their engagement and perceived usefulness. That ultimately translates into lost revenue, since SaaS apps typically charge either per user or by usage. An advanced AI agent can automate the workflows of an entire department, which may be covered by multiple SaaS products. So instead of all those subscriptions, you just use an agent to do it all. That can lead to significant savings in software costs. On top of the cost savings are time savings. Jeremiah Stone, CTO with enterprise integration platform vendor SnapLogic, said agents have resulted in a 90% reduction in time for data entry and reporting into the company’s Salesforce system.


Ask a CIO Recruiter: Where Is the ‘I’ in the Modern CIO Role?

First, there are obviously huge opportunities AI can provide the business, whether it’s cost optimization or efficiencies, so there is a lot of pressure from boards and sometimes CEOs themselves saying ‘what are we doing in AI?’ The second is that there are significant opportunities for AI to enable the business in decision-making. The third leg is that AI is not fully leveraged today; it’s not in a very easy-to-use space. That is coming, and CIOs need to be able to prepare the organization for that change. CIOs need to prepare their teams, as well as business users, and say ‘hey, this is coming, we’ve already experimented with a few things. There are a lot of use cases applied in certain industries; how are we prepared for that?’ ... Just having that vision to see where technology is going and trying to stay ahead of it is important. Not necessarily chasing the shiny new toy, the new technology, but just being ahead of it is the most important skill set. Look around the corner and prepare the organization for the change that will come. Also, if you retrain some of the people, they have to be more analytical, more business-minded. Those are good skills. That’s not easy to find. A lot of people [who] move into the CIO role are very technical, whether it is coding or heavily on the infrastructure side. That is a commodity today; you need to be beyond that.


Insider risk management needs a human strategy

A technical-only response to insider risk can miss the mark, we need to understand the human side. That means paying attention to patterns, motivations, and culture. Over-monitoring without context can drive good people away and increase risk instead of reducing it. When it comes to workplace monitoring, clarity and openness matter. “Transparency starts with intentional communication,” said Itai Schwartz, CTO of MIND. That means being upfront with employees, not just that monitoring is happening, but what’s being monitored, why it matters, and how it helps protect both the company and its people. According to Schwartz, organizations often gain employee support when they clearly connect monitoring to security, rather than surveillance. “Employees deserve to know that monitoring is about securing data – not surveilling individuals,” he said. If people can see how it benefits them and the business, they’re more likely to support it. Being specific is key. Schwartz advises clearly outlining what kinds of activities, data, or systems are being watched, and explaining how alerts are triggered. ... Ethical monitoring also means drawing boundaries. Schwartz emphasized the importance of proportionality: collecting only what’s relevant and necessary. “Allow employees to understand how their behavior impacts risk, and use that information to guide, not punish,” he said.


Sharing Intelligence Beyond CTI Teams, Across Wider Functions and Departments

As companies’ digital footprints expand exponentially, so too do their attack surfaces. And since most phishing attacks can be carried out by even the least sophisticated hackers due to the prevalence of phishing kits sold in cybercrime forums, it has never been harder for security teams to plug all the holes, let alone other departments who might be undertaking online initiatives which leave them vulnerable. CTI, digital brand protection and other cyber risk initiatives shouldn’t only be utilized by security and cyber teams. Think about legal teams, looking to protect IP and brand identities, marketing teams looking to drive website traffic or demand generation campaigns. They might need to implement digital brand protection to safeguard their organization’s online presence against threats like phishing websites, spoofed domains, malicious mobile apps, social engineering, and malware. In fact, deepfakes targeting customers and employees now rank as the most frequently observed threat by banks, according to Accenture’s Cyber Threat Intelligence Research. For example, there have even been instances where hackers are tricking large language models into creating malware that can be used to hack customers’ passwords.

Daily Tech Digest - March 23, 2025


Quote for the day:

"Law of Leadership: A successful team with 100 members has 100 leaders." -- Lance Secretan


Citizen Development: The Wrong Strategy for the Right Problem

The latest generation of citizen development offenders are the low-code and no-code platforms that promise to democratize software development by enabling those without formal programming education to build applications. These platforms fueled enthusiasm around speedy app development — especially among business users — but their limitations are similar to the generations of platforms that came before. ... Don't get me wrong — the intentions behind citizen development come from a legitimate place. More often than not, IT needs to deliver faster to keep up with the business. But these tools promise more than they can deliver and, worse, usually result in negative unintended consequences. Think of it as a digital house of cards, where disparate apps combine to create unscalable systems that can take years and/or millions of dollars to fix. ... Struggling to keep up with business demands is a common refrain for IT teams. Citizen development has attempted to bridge the gap, but it typically creates more problems than solutions. Rather than relying on workarounds and quick fixes that potentially introduce security risks and inefficiency — and certainly rather than disintermediating IT — businesses should embrace the power of GenAI to support their developers and ultimately to make IT more responsive and capable.


Researchers Test a Blockchain That Only Quantum Computers Can Mine

The quantum blockchain presents a path forward for reducing the environmental cost of digital currencies. It also provides a practical incentive for deploying early quantum computers, even before they become fully fault-tolerant or scalable. In this architecture, the cost of quantum computing — not electricity — becomes the bottleneck. That could shift mining centers away from regions with cheap energy and toward countries or institutions with advanced quantum computing infrastructure. The researchers also argue that this architecture offers broader lessons. ... “Beyond serving as a proof of concept for a meaningful application of quantum computing, this work highlights the potential for other near-term quantum computing applications using existing technology,” the researchers write. ... One of the major limitations, as mentioned, is cost. Quantum computing time remains expensive and limited in availability, even as energy use is reduced. At present, quantum PoQ may not be economically viable for large-scale deployment. As progress continues in quantum computing, those costs may be mitigated, the researchers suggest. D-Wave machines also use quantum annealing — a different model from the quantum computing platforms pursued by companies like IBM and Google. 


Enterprise Risk Management: How to Build a Comprehensive Framework

Risk objects are the human capital, physical assets, documents and concepts (e.g., “outsourcing”) that pose risk to an organization. Stephen Hilgartner, a Cornell University professor, once described risk objects as “sources of danger” or “things that pose hazards.” The basic idea is that any simple action, like driving a car, has associated risk objects – such as the driver, the car and the roads. ... After the risk objects have been defined, the risk management processes of identification, assessment and treatment can begin. The goal of ERM is to develop a standardized system that not only acknowledges the risks and opportunities in every risk object but also assesses how the risks can impact decision-making. For every risk object, hazards and opportunities must be acknowledged by the risk owner. Risk owners are the individuals managerially accountable for the risk objects. These leaders and their risk objects establish a scope for the risk management process. Moreover, they ensure that all risks are properly managed based on approved risk management policies. To complete all aspects of the risk management process, risk owners must guarantee that risks are accurately tied to the budget and organizational strategy.


Choosing consequence-based cyber risk management to prioritize impact over probability, redefine industrial security

Nonetheless, the biggest challenge for applying consequence-based cyber risk management is the availability of holistic information regarding cyber events and their outcomes. Most companies struggle to gauge the probable damage of attacks based on inadequate historical data or broken-down information systems. This has led to increased adoption of analytics and threat intelligence technologies to enable organizations to simulate the ‘most likely’ outcome of cyber-attacks and predict probable situations. ... “A winning strategy incorporates prevention and recovery. Proactive steps like vulnerability assessments, threat hunting, and continuous monitoring reduce the likelihood and impact of incidents,” according to Morris. “Organizations can quickly restore operations when incidents occur with robust incident response plans, disaster recovery strategies, and regular simulation exercises. This dual approach is essential, especially amid rising state-sponsored cyberattacks.” ... “To overcome data limitations, organizations can combine diverse data sources, historical incident records, threat intelligence feeds, industry benchmarks, and expert insights, to build a well-rounded picture,” Morris detailed. “Scenario analysis and qualitative assessments help fill in gaps when quantitative data is sparse. Engaging cross-functional teams for continuous feedback ensures these models evolve with real-world insights.”


The CTO vs. CMO AI power struggle - who should really be in charge?

An argument can be made that the CTO should oversee everything technical, including AI. Your CTO is already responsible for your company's technology infrastructure, data security, and system reliability, and AI directly impacts all these areas. But does that mean the CTO should dictate what AI tools your creative team uses? Does the CTO understand the fundamentals of what makes good content or the company's marketing objectives? That sounds more like a job for your creative team or your CMO. On the other hand, your CMO handles everything from brand positioning and revenue growth to customer experiences. But does that mean they should decide what AI tools are used for coding or managing company-wide processes or even integrating company data? You see the problem, right? ... Once a tool is chosen, our CTO will step in. They perform their due diligence to ensure our data stays secure, confidential information isn't leaked, and none of our secrets end up on the dark web. That said, if your organization is large enough to need a dedicated Chief AI Officer (CAIO), their role shouldn't be deciding AI tools for everyone. Instead, they're a mediator who connects the dots between teams. 


Why Cyber Quality Is the Key to Security

To improve security, organizations must adopt foundational principles and assemble teams accountable for monitoring safety concerns. Cyber resilience and cyber quality are two pillars that every institution — especially at-risk ones — must embrace. ... Do we have a clear and tested cyber resilience plan to reduce the risk and impact of cyber threats to our business-critical operations? Is there a designated team or individual focused on cyber resilience and cyber quality? Are we focusing on long-term strategies, targeted at sustainable and proactive solutions? If the answer to any of these questions is no, something needs to change. This is where cyber quality comes in. Cyber quality is about prioritization and sustainable long-term strategy for cyber resilience, and is focused on proactive/preventative measures to ensure risk mitigation. This principle is not a marked checkbox on controls that show very little value in the long run. ... Technology alone doesn't solve cybersecurity problems — people are the root of both the challenges and the solutions. By embedding cyber quality into the core of your operations, you transform cybersecurity from a reactive cost center into a proactive enabler of business success. Organizations that prioritize resilience and proactive governance will not only mitigate risks but thrive in the digital age. 


ISO 27001: Achieving data security standards for data centers

Achieving ISO 27001 certification is not an overnight process. It’s a journey that requires commitment, resources, and a structured approach in order to align the organization’s information security practices with the standard’s requirements. The first step in the process is conducting a comprehensive risk assessment. This assessment involves identifying potential security risks and vulnerabilities in the data center’s infrastructure and understanding the impact these risks might have on business operations. This forms the foundation for the ISMS and determines which security controls are necessary. ... A crucial, yet often overlooked, aspect of ISO 27001 compliance is the proper destruction of data. Data centers are responsible for managing vast amounts of sensitive information and ensuring that data is securely sanitized when it is no longer needed is a critical component of maintaining information security. Improper data disposal can lead to serious security risks, including unauthorized access to confidential information and data breaches. ... Whether it's personal information, financial records, intellectual property, or any other type of sensitive data, the potential risks of improper disposal are too great to ignore. Data breaches and unauthorized access can result in significant financial loss, legal liabilities, and reputational damage.


Understanding code smells and how refactoring can help

Typically, code smells stem from a failure to write source code in accordance with necessary standards. In other cases, it means that the documentation required to clearly define the project's development standards and expectations was incomplete, inaccurate or nonexistent. There are many situations that can cause code smells, such as improper dependencies between modules, an incorrect assignment of methods to classes or needless duplication of code segments. Code that is particularly smelly can eventually cause profound performance problems and make business-critical applications difficult to maintain. It's possible that the source of a code smell may cause cascading issues and failures over time. ... The best time to refactor code is before adding updates or new features to an application. It is good practice to clean up existing code before programmers add any new code. Another good time to refactor code is after a team has deployed code into production. After all, developers have more time than usual to clean up code before they're assigned a new task or a project. One caveat to refactoring is that teams must make sure there is complete test coverage before refactoring an application's code. Otherwise, the refactoring process could simply restructure broken pieces of the application for no gain. 
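
As a small illustration of one smell named above, needless duplication of code segments, here is a hypothetical before/after in Java; the ReportService class and its header format are invented for the example.

import java.time.LocalDate;

// Before: the header block is duplicated in every report method.
class ReportServiceSmelly {
    String dailyReport(double total) {
        String header = "ACME Corp\nGenerated: " + LocalDate.now() + "\n";
        return header + "Daily total: " + total;
    }

    String weeklyReport(double total) {
        String header = "ACME Corp\nGenerated: " + LocalDate.now() + "\n";
        return header + "Weekly total: " + total;
    }
}

// After: the duplication lives in one place, so a format change touches one method.
class ReportServiceRefactored {
    String dailyReport(double total) {
        return header() + "Daily total: " + total;
    }

    String weeklyReport(double total) {
        return header() + "Weekly total: " + total;
    }

    private String header() {
        return "ACME Corp\nGenerated: " + LocalDate.now() + "\n";
    }
}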


Handling Crisis: Failure, Resilience And Customer Communication

Failure is something leaders want to reduce as much as they can, and it’s possible to design products with graceful failure in mind. It’s also called graceful degradation and can be thought of as a tolerance to faults or faulting. It can mean that core functions remain usable as parts or connectivity fails. You want any failure to cause as little damage or lack of service as possible. Think of it as a stopover on the way to failing safely: When our plane engines fail, we want them to glide, not plummet. ... Resilience requires being on top of it all: monitoring, visibility, analysis and meeting and exceeding the SLAs your customers demand. For service providers, particularly in tech, you can focus on a full suite of telemetry from the operational side of the business and decide your KPIs and OKRs. You can also look at your customers’ perceptions via churn rate, customer lifetime value, Net Promoter Score and so on. ... If you are to cope with the speed and scale of potential technical outages, this is essential. Accuracy, then speed, should be your priorities when it comes to communicating about outages. The more of both, the better, but accuracy is the most important, as it allows customers to make informed choices as they manage the impact on their own businesses.


Approaches to Reducing Technical Debt in Growing Projects

Technical debt, also known as “tech debt,” refers to the extra work developers incur by taking shortcuts or delaying necessary code improvements during software development. Though sometimes these shortcuts serve a short-term goal — like meeting a tight release deadline — accumulating too many compromises often results in buggy code, fragile systems, and rising maintenance costs. ... Massive rewrites can be risky and time-consuming, potentially halting your roadmap. Incremental refactoring offers an alternative: focus on high-priority areas first, systematically refining the codebase without interrupting ongoing user access or new feature development. ... Not all parts of your application contribute to technical debt equally. Concentrate on elements tied directly to core functionality or user satisfaction, such as payment gateways or account management modules. Use metrics like defect density or customer support logs to identify “hotspots” that accumulate excessive technical debt. ... Technical debt often creeps in when teams skip documentation, unit tests, or code reviews to meet deadlines. A clear “definition of done” helps ensure every feature meets quality standards before it’s marked complete.

Daily Tech Digest - December 02, 2024

The end of AI scaling may not be nigh: Here’s what’s next

The concern is that scaling, which has driven advances for years, may not extend to the next generation of models. Reporting suggests that the development of frontier models like GPT-5, which push the current limits of AI, may face challenges due to diminishing performance gains during pre-training. The Information reported on these challenges at OpenAI and Bloomberg covered similar news at Google and Anthropic. This issue has led to concerns that these systems may be subject to the law of diminishing returns — where each added unit of input yields progressively smaller gains. As LLMs grow larger, the costs of getting high-quality training data and scaling infrastructure increase exponentially, reducing the returns on performance improvement in new models. Compounding this challenge is the limited availability of high-quality new data, as much of the accessible information has already been incorporated into existing training datasets. ... While scaling challenges dominate much of the current discourse around LLMs, recent studies suggest that current models are already capable of extraordinary results, raising a provocative question of whether more scaling even matters.


How to talk to your board about tech debt

Instead of opening the conversation about “code quality,” start talking about business outcomes. Rather than discuss “legacy systems,” talk about “revenue bottlenecks,” and replace “technical debt” with “innovation capacity.” When you reframe the conversation this way, technical debt becomes a strategic business issue that directly impacts the value metrics the board cares about most. ... Focus on delivering immediate change in a self-funding way. Double down on automation through AI. Take out costs and use those funds to compress your transformation. ... Here’s where many CIOs stumble: presenting technical debt as a problem that needs to be eliminated. Instead, show how leading companies manage it strategically. Our research reveals that top performers allocate around 15% of their IT budget to debt remediation. This balances debt reduction and prioritizes future strategic innovations, which means committing to continuous updates, upgrades, and management of end-user software, hardware, and associated services. And it translates into an organization that’s stable and innovative. We also found throwing too much money at tech debt can be counterproductive. Our analysis found a distinct relationship between a company’s digital core maturity and technical debt remediation. 


Why You Need More Than A Chief Product Security Officer In The Age Of AI

Security by design means building digital systems and products that have security as their foundation. When building software, a security-by-design approach will involve a thorough risk analysis of the product, considering potential weaknesses that could be exploited by attackers. This is known as threat modeling, and it helps to expand on a desire for "secure" software to ask "security of what?" and "secure from whom?" With these considerations and recommendations, products are designed with the appropriate security controls for the given industry and regulatory environment. To do this well, two teams are needed—the developers and the security team. However, there’s a common misconception that these teams are trained with the same knowledge and skill set to work cohesively. ... As the AI landscape rapidly evolves, businesses must proactively adapt to emerging regulatory requirements; this transformation begins with a fundamental cultural shift. In an era where AI plays a pivotal role in driving innovation, threat modeling should no longer be an afterthought but a pillar of responsible AI leadership. While appointing a chief product security officer is a smart first step, adopting a security-by-design mindset starts by bringing together developer and security teams at the early software design phase.


Enterprise Architecture in 2025 and beyond

The democratisation of AI presents both a challenge and an opportunity for enterprise architects. While generative AI lowers the barrier to entry for coding and data analysis, it also complicates the governance landscape. Organisations must grapple with the reality that, when it comes to skills, anyone can now leverage AI to generate code or analyse data without the traditional oversight mechanisms that have historically been in place. ... The acceleration of technological innovation presents both opportunities and challenges for enterprise architects. With generative AI leading the charge, organisations are compelled to innovate faster than ever before. Yet, this rapid pace raises significant concerns around risk management and regulatory compliance. Enterprise architects must navigate this tension by implementing frameworks that allow for agile innovation while maintaining necessary safeguards. ... In the evolving landscape of EA, the concept of a digital twin of an organisation (DTO) is emerging as a transformative opportunity, and we see this being realised in 2025. ... Outside of 'what-ifs', AI could enable real-time decision-making within DTOs by continuously processing and analysing live data streams. This is particularly valuable for dynamic industries like retail or manufacturing, where market conditions, customer demands, or operational circumstances can shift rapidly.


Clearing the Clouds Around the Shared Responsibility Model

Enterprise leaders need to dig into the documentation for each cloud service they use to understand their organizational responsibilities and to avoid potential gaps and misunderstandings. While there is a definite division of responsibilities, CSPs typically position themselves as partners eager to help their customers uphold their part of cloud security. “The cloud service providers are very interested and invested in their customers understanding the model,” says Armknecht. ... Both parties, customer and provider, have their security responsibilities, but misunderstandings can still arise. In the early days of cloud, the incorrect assumption of automatic security was one of the most common misconceptions enterprise leaders had around cloud. Cloud providers secure the cloud, so any data plunked in the cloud was automatically safe, right? Wrong. ... Even if customers fully understand their responsibilities, they may make mistakes when trying to fulfill them. Misconfigurations are a potential outcome for customers navigating cloud security. It is also possible for misconfigurations to occur on the cloud provider side. “The CIA triad: confidentiality, integrity, and availability. Essentially a misconfiguration or a lack of configuration is going to put one of those things at risk,” says Armknecht. 


Data centers go nuclear for power-hungry AI workloads

AWS, Google, Meta, Microsoft, and Oracle are among the companies exploring nuclear energy. “Nuclear power is a carbon-free, reliable energy source that can complement variable renewable energy sources like wind and solar with firm generation. Advanced nuclear reactors are considered safer and more efficient than traditional nuclear reactors. They can also be built more quickly and in a more modular fashion,” said Amanda Peterson Corio, global head of data center energy at Google. ... “The NRC has, for the last few years, been reviewing both preliminary information and full applications for small modular reactors, including designs that cool the reactor fuel with inert gases, molten salts, or liquid metals. Our reviews have generic schedules of 2 to 3 years, depending on the license or permit being sought,” said Scott Burnell, public affairs officer at the NRC. ... Analysts agree that nuclear is an essential part of a carbon-free, AI-burdened electric grid. “The attraction of nuclear in a world where you’re trying to take the grid to carbon-free energy is that it is really the only proven reliable source of carbon-free energy, one that generates whenever I need it to generate, and I can guarantee that capacity is there, except for the refuel or the maintenance periods,” Uptime Institute’s Dietrich pointed out.


How Banking Leaders Can Enhance Risk and Compliance With AI

On one hand, AI can reduce risk exposure while making regulatory compliance more efficient. AI can also enhance fraud and cybersecurity detection. On the other hand, the complexity of AI models, coupled with concerns around data privacy and algorithmic transparency, requires careful oversight to avoid regulatory pitfalls and maintain customer or member trust. How the industry moves forward will largely depend on pending regulations and the leaps AI science may take, but for now, here is where the current state of affairs lies. ... While AI holds immense potential, its adoption hinges on maintaining account holder confidence. One of the most common concerns expressed by both financial institutions and their account holders is around transparency in AI decision-making. While 73% of financial institutions are convinced that AI can significantly enhance digital account holder experiences, apprehensions about AI’s impact on account holder trust are significant, with 54% expressing concerns over potential negative effects. The concern seems valid, as less than half of consumers feel comfortable with their financial data being processed by AI, even if it gives them a better digital banking experience.


When Prompt Injections Attack: Bing and AI Vulnerabilities

Tricking a chatbot into behaving badly (by “injecting” a cleverly malicious prompt into its input) turns out to be just the beginning. So what should you do when a chatbot tries tricking you back? And are there lessons we can learn — or even bigger issues ahead? ... While erroneous output is often called an AI “hallucination,” Edwards has been credited with popularizing the alternate term “confabulation.” It’s a term from psychology that describes the filling of memory gaps with imaginings. Willison complains that both terms are still derived from known-and-observed human behaviors. But then he acknowledges that it’s probably already too late to stop the trend of projecting humanlike characteristics onto AI. “That ship has sailed…” Is there also a hidden advantage there too? “It turns out, thinking of AIs like human beings is a really useful shortcut for all sorts of things about how you work with them…” “You tell people, ‘Look, it’s gullible.’ You tell people it makes things up, it can hallucinate all of those things. … I do think that the human analogies are effective shortcuts for helping people understand how to use these things and how they work.”


Refactoring AI code: The good, the bad, and the weird

Generative AI is no longer a novelty in the software development world: it’s being increasingly used as an assistant (and sometimes a free agent) to write code running in real-world production. But every developer knows that writing new code from scratch is only a small part of their daily work. Much of a developer’s time is spent maintaining an existing codebase and refactoring code written by other hands. ... “AI-based code typically is syntactically correct but often lacks the clarity or polish that comes from a human developer’s understanding of best practices,” he says. “Developers often need to clean up variable names, simplify logic, or restructure code for better readability.” ... According to Gajjar, “AI tools are known to overengineer solutions so that the code produced is bulkier than it really should be for simple tasks. There are often extraneous steps that developers have to trim off, or a simplified structure must be achieved for efficiency and maintainability.” Nag adds that AI can “throw in error handling and edge cases that aren’t always necessary. It’s like it’s trying to show off everything it knows, even when a simpler solution would suffice.”
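
A hedged before/after sketch of the kind of trimming described above. The "before" class imitates AI output that adds generality and defensive error handling a simple in-memory summation never needs; both classes are invented for illustration.

import java.util.List;

// "Before": verbose names, null paranoia, and a swallowed exception for a trivial task.
class SumBefore {
    Double computeAggregateValue(List<Double> inputValues) {
        if (inputValues == null) {
            throw new IllegalArgumentException("inputValues must not be null");
        }
        double accumulator = 0.0;
        try {
            for (int index = 0; index < inputValues.size(); index++) {
                Double currentValue = inputValues.get(index);
                if (currentValue != null) {
                    accumulator = accumulator + currentValue.doubleValue();
                }
            }
        } catch (RuntimeException unexpected) {
            return 0.0; // swallowing exceptions hides bugs instead of handling them
        }
        return Double.valueOf(accumulator);
    }
}

// "After": the same job, readable at a glance and easier to maintain.
class SumAfter {
    double sum(List<Double> values) {
        return values.stream().mapToDouble(Double::doubleValue).sum();
    }
}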


How Businesses Can Speed Up AI Adoption

To ensure successful AI adoption, businesses should follow a structured approach that focuses on key strategic steps. First, they should build and curate their organisational data assets. A solid data foundation is crucial for effective AI initiatives, enabling companies to draw meaningful insights that drive accurate AI results and consumer interactions. Next, identifying applicable use cases tailored to specific business needs is essential. This may include generative, visual, or conversational AI applications, ensuring alignment with organisational goals. When investing in AI capabilities, choosing off-the-shelf solutions is advisable, unless there is a compelling business justification for custom development. This allows companies to quickly implement new technologies without accumulating technical debt. Finally, maintaining an active data feedback loop is vital for AI effectiveness. Regularly updating data ensures AI models produce accurate results and helps prevent issues associated with “stale” data, which can hinder performance and limit insights. ... As external pressures such as regulatory changes and shifting consumer expectations create a sense of urgency and complexity, it’s critical that organisations are proactive in overcoming internal obstacles.



Quote for the day:

“People are not lazy. They simply have important goals – that is, goals that do not inspire them.” -- Tony Robbins

Daily Tech Digest - January 01, 2021

The Financial Services Industry Is About To Feel The Multiplier Effect Of Emerging Technologies

Think about a world where retail banks could send cross-border payments directly to a counterparty without navigating through intermediaries. Instead, you could use a service dedicated to carrying out “Know Your Customer” processes on behalf of the financial services community. The same principle could apply for other transactions. Maybe a single, global fund transfer network is in our future, where any kind of transaction could flow autonomously while sharing only the minimum information necessary, maintaining the privacy of all other personal financial data. ... The technology now exists to massively increase computational power for a range of specific problems, such as simulation and machine learning, by trying all possibilities at once and linking events together. It’s more like the physical phenomena of nature versus the on-or-off switches of ordinary computer calculations. As a result, for instance, an investment bank may no longer have to choose between accuracy and speed when deciding how to allocate collateral across multiple trading desks. It could also give banks a more accurate way to determine how much capital to keep on hand to meet regulations.


The patching conundrum: When is good enough good enough?

Clearly some adjustment is needed on an unknown number of Windows machines. And therein lies the big problem with the Windows ecosystem: Even though we have had Windows for years, it’s still a very vast and messy ecosystem of hardware vendors, multiple drivers, and software vendors that often build their solutions on something undocumented. Microsoft over the years has clamped down on this “wild west” approach and mandated certain developer requirements. It’s one of the main reasons I strongly recommend that if you want to be in the Insider program or install feature releases on the very first day they are released, that you use Windows Defender as your antivirus, and not something from a third party.  While Microsoft will often follow up with a fix for a patch problem, typically — unlike this issue — it is not released in the same fashion as the original update. Case in point: in November, Microsoft released an update that impacted Kerberos authentication and ticket renewal issues. Later last month, on Nov. 19, it released an out-of-band update for the issue. The update was not released to the Windows update release channel, nor on the Windows Software Update Servicing release channel; instead IT administrators had to manually seek it out and download it or insert it into their WSUS servers.


Building a SQL Database Audit System using Kafka, MongoDB and Maxwell's Daemon

Compliance and auditing: Auditors need the data in a meaningful and contextual manner from their perspective. DB audit logs are suitable for DBA teams but not for auditors. The ability to generate critical alerts in case of a security breach is a basic requirement of any large-scale software. Audit logs can be used for this purpose. You must be able to answer a variety of questions, such as who accessed the data, what the earlier state of the data was, what was modified when it was updated, and whether internal users are abusing their privileges. It’s important to note that since audit trails help identify infiltrators, they promote deterrence among "insiders." People who know their actions are scrutinized are less likely to access unauthorized databases or tamper with specific data. All kinds of industries - from finance and energy to foodservice and public works - need to analyze data access and produce detailed reports regularly to various government agencies. Consider the Health Insurance Portability and Accountability Act (HIPAA) regulations. HIPAA requires that healthcare providers deliver audit trails about anyone and everyone who touches any data in their records.
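
As a rough sketch of the consuming side of such a pipeline, the following Java snippet reads Maxwell's Daemon change events from Kafka before they would be stored (for example, in MongoDB) for audit queries. The broker address, consumer group, and the use of Maxwell's default "maxwell" topic are assumptions for illustration; Maxwell publishes one JSON document per row change, including the new row state ("data") and the prior values of changed columns ("old"), which is exactly the before/after detail auditors ask for.

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class AuditTrailConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("group.id", "audit-trail-writer");         // assumed consumer group
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("maxwell")); // Maxwell's default topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // record.value() is a JSON change event along the lines of:
                    // {"database":"shop","table":"orders","type":"update",
                    //  "ts":1604880000,"data":{...new row...},"old":{...prior values...}}
                    // A real implementation would parse it and write it to MongoDB so
                    // auditors can query who changed what, and what the prior state was.
                    System.out.println(record.value());
                }
            }
        }
    }
}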


How Skillate leverages deep learning to make hiring intelligent

Skillate can work both as a standalone ATS that takes care of the end-to-end recruitment needs of your organization and as an intelligent system that integrates with your existing ATS to make your recruitment easy, fast, and transparent. It does this by banking on cutting-edge technology and the power of AI to integrate with existing platforms, such as traditional ATSs like Workday and SuccessFactors, to solve some real pain points of the industry. However, for AI to work in a complex industry like recruitment, we need to consider the human element involved. Take for instance the words Skillate and Skillate.com — both these words refer to the same company but will be treated as different words by a machine. Moreover, every day new companies and institute names come up, and thus it is almost impossible to keep the software’s vocabulary updated. To illustrate further, consider the following two statements: 'Currently working as a Data Scientist at <Amazon>’ and, ‘Worked on a project for the client Amazon.’ In the first statement, “Amazon” will be tagged as a company as the statement is about working in the organization. But in the latter “Amazon” should be considered as a normal word and not as a company. Hence the same word can have different meanings based on its usage.


How to Build Cyber Resilience in a Dangerous Atmosphere

The first step to achieving cyber resilience is to start with a fundamental paradigm shift: Expect to be breached, and expect it to happen sooner than later. You are not "too small to be of interest," what you do is not "irrelevant for an attacker," it doesn't matter that there is a "bigger fish in the pond to go after." Your business is interconnected to all the others; it will happen to you. Embrace the shift. Step away from a one-size-fits-all cybersecurity approach. Ask yourself: What parts of the business and which processes are generating substantial value? Which must continue working, even when suffering an attack, to stay in business? Make plans to provide adequate protection — but also for how to stay operational if the digital assets in your critical processes become unavailable. Know your most important assets, and share this information among stakeholders. If your security admin discovers a vulnerability on a server with IP address 172.32.100.100 but doesn't know the value of that asset within your business processes, how can IT security properly communicate the threat? Would a department head fully understand the implications of a remote code execution (RCE) attack on that system? 


A New Product Aims To Disrupt Free Credit Scores With Blockchain Technology

The foundation of Zoracles Protocol that differentiates the project from other decentralized finance projects is its use of cutting-edge privacy technologies centered around zero-knowledge proofs. Those familiar with these privacy-preserving techniques were most likely introduced to these concepts by the team at Electric Coin Company, who are responsible for the zero-knowledge proofs developed for the privacy cryptocurrency Zcash. Zoracles will build zk-SNARKs that are activated when pulling consumer credit scores while hiding their values as they are brought onto the blockchain. This is accomplished with a verification proof derived from the ZoKrates toolbox. Keeping the data confidential is critical to ensure confidence from users to have their data available on-chain. It can be compared to using https (SSL) to transmit credit card data, which allowed eCommerce to flourish. A very interesting long-term goal of Zora.cc is to eventually use credit score verification to prove identity. The implications are enormous for the usefulness of their protocol if it can become the market leader in decentralized identity. The team is focused on building the underlying API infrastructure as well as a front-end user experience. If executed successfully, it is very similar to the product offering of Twilio. The “Platform as a Service” could go well with Zoracles “Snarks as a Service.” One should watch this project closely.


Refactoring is a Development Technique, Not a Project

One of the more puzzling misconceptions that I hear pertains to the topic of refactoring. I consult on a lot of legacy rescue efforts that will need to involve refactoring, and people in and around those efforts tend to think of “refactor” as “massive cleanup effort.” I suspect this is one of those conflations that happens subconsciously. If you actually asked some of these folks whether “refactor” and “massive cleanup effort” were synonyms, they would say no, but they never conceive of the terms in any other way during their day to day activities. Let’s be clear. Here is the actual definition of refactoring, per Wikipedia. Code refactoring is the process of restructuring existing computer code – changing the factoring – without changing its external behavior. Significantly, this definition mentions nothing about the scope of the effort. Refactoring is changing the code without changing the application’s behavior. This means the following would be examples of refactoring, provided they changed nothing about the way the system interacted with external forces: Renaming variables in a single method; Adding whitespace to a class for readability; Eliminating dead code; Deleting code that has been commented out; and Breaking a large method apart into a few smaller ones.
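
A tiny, hypothetical Java example of those small-scope refactorings, renaming a variable, deleting commented-out code, and removing a dead branch, none of which changes the external behavior.

// Before: cryptic names, a commented-out experiment, and a branch that can never run.
class InvoiceTotalsBefore {
    double calc(double[] a) {
        double t = 0;
        // double discount = 0.1;   // leftover experiment
        for (double x : a) {
            t += x;
        }
        boolean debug = false;
        if (debug) {                 // dead code
            System.out.println(t);
        }
        return t;
    }
}

// After: same result for every input, clearer names, nothing that misleads the reader.
class InvoiceTotalsAfter {
    double totalOf(double[] lineItemAmounts) {
        double total = 0;
        for (double amount : lineItemAmounts) {
            total += amount;
        }
        return total;
    }
}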


Automation nation: 9 robotics predictions for 2021

"Autonomous robots took on more expansive roles in stores and warehouses during the pandemic," says Rowland, "which is expected to gain momentum in 2021. Data-collecting robots shared real-time inventory updates and accurate product location data with mobile shopping apps, online order pickers and curbside pickup services along with in-store shoppers and employees." That's especially key in large retail environments, with hundreds of thousands of items, where the ability to pinpoint products is a major productivity booster. Walmart recently cut its contract with robotic shelf scanning company Bossa Nova, but Rowland believes the future is bright for the technology category. Heretofore, automation solutions have largely been task-specific. That could be a thing of the past, according to Rowland. "Autonomous robots can easily handle different duties, often referred to as 'payloads,' which are programmed to address varying requirements, including but not limited to, inventory management, hazard detection, security checks, surface disinfectants, etc. In the future, retailers will have increased options for mixing/matching automated workflows to meet specific operational needs." Remember running out of toilet paper? So do retailers and manufacturers, and it was a major wake up call.


Data for development: Revisiting the non-personal data governance framework

The framework needs to be reimagined from multiple perspectives. From the ground up, people — individuals and communities — must control their data and it should not be just considered a resource to fuel “innovation.” More specifically, data sharing of any sort needs to be anchored in individual data protection and privacy. The purpose for data sharing must be clear from the outset, and data should only be collected to answer clear, pre-defined questions. Further, individuals must be able to consent dynamically to the collection/use of their data, and to grant and withdraw consent as needed. At the moment, the role of the individual is limited to consenting to anonymise their personal data, which is seen as a sufficient condition for subsequent data sharing without consent. Collectives have a significant role to play in negotiating better rights in the data economy. Bottom up instruments such as data cooperatives, unions, and trusts that allow individual users to pool their data rights must be actively encouraged. There is also a need to create provisions for collectives — employees, public transport users, social media networks — to sign on to these instruments to enable collective bargaining on data rights.


3 things you need to know as an experienced software engineer

When we are in a coding competition where the clock is ticking, all we care about is efficiency. We will be using variable names such as a, b, c, or index names such as j, k, l. Paying less attention to naming can save us a lot of time, and we will probably throw the code away right after the upload passed all the test sets. This is called “throw-away code”. Such code is short and, as the name suggests, won’t be kept for long. In a real-life software engineering project, however, our code will likely be reused and modified, and the person doing so may be someone other than ourselves, or ourselves after 6 months of working on a different module. ... Readability is so important that sometimes we even sacrifice efficiency for it. We will probably choose the less readable but extremely efficient lines of code when working on projects that aim to be optimized within several CPU cycles and limited memory space, such as the control system running on a microprocessor. However, in many real-life scenarios we care much less about that millisecond difference on a modern computer, and writing more readable code will cause much less trouble for our teammates.



Quote for the day:

"Leadership does not always wear the harness of compromise." -- Woodrow Wilson

Daily Tech Digest - November 11, 2020

The Role of Relays In Big Data Integration

The very nature of big data integration requires an organization to become more flexible in some ways; particularly when gathering input and metrics from such varied sources as mobile apps, browser heuristics, A/V input, software logs, and more. The number of different methodologies, protocols, and formats that your organization needs to ingest while complying with both internal and government-mandated standards can be staggering. ... What if, instead of just allowing all of that data to flow in from dozens of information silos, you introduced a set of intelligent buffers? Imagine that each of these buffers was purpose-built for the kind of input that you needed to receive at any given time: Shell scripts, REST APIs, federated DBs, hashed log files, and the like. Let’s call these intelligent buffers what they really are: Relays. They ingest SSL encrypted data, send out additional queries as needed, and provide fault-tolerant data access according to ACLs specific to the team and server-side apps managing that dataset. If you were to set up such a distributed relay architecture to deal with your big data integration chain, it might look something like this.
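
As a very rough sketch of what one purpose-built relay could look like in code, here is a hypothetical Java version; all names and interfaces are invented for illustration and are not taken from the article.

import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentLinkedQueue;

// A relay is a purpose-built buffer for one kind of input, guarded by an ACL.
interface Relay<T> {
    void ingest(List<T> records, String teamId);   // one source type: logs, REST, DB dumps...
    List<T> drain(String teamId);                   // downstream consumers pull validated data
}

class LogFileRelay implements Relay<String> {
    private final ConcurrentLinkedQueue<String> buffer = new ConcurrentLinkedQueue<>();
    private final Set<String> allowedTeams;

    LogFileRelay(Set<String> allowedTeams) {
        this.allowedTeams = allowedTeams;
    }

    @Override
    public void ingest(List<String> records, String teamId) {
        requireAccess(teamId);
        buffer.addAll(records); // a real relay would also decrypt, validate and normalize here
    }

    @Override
    public List<String> drain(String teamId) {
        requireAccess(teamId);
        List<String> out = new ArrayList<>();
        String record;
        while ((record = buffer.poll()) != null) {
            out.add(record);
        }
        return out;
    }

    private void requireAccess(String teamId) {
        if (!allowedTeams.contains(teamId)) {
            throw new SecurityException("Team " + teamId + " is not permitted by this relay's ACL");
        }
    }
}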


Malware Hidden in Encrypted Traffic Surges Amid Pandemic

Ransomware attacks delivered via SSL/TLS channels soared 500% between March and September, with a plurality of the attacks (40.5%) targeted at telecommunication and technology companies. Healthcare organizations were targeted more so than entities in other verticals and accounted for 1.6 billion, or over 25%, of all SSL-based attacks Zscaler blocked this year. Finance and insurance companies clocked in next with 1.2 billion or 18% of attacks blocked, and manufacturing organizations were the third-most targeted, with some 1.1 billion attacks directed against them. Deepen Desai, CISO and vice president of security research at Zscaler, says the trend shows why security groups need to be wary about encrypted traffic traversing their networks. While many organizations routinely encrypt traffic as part of their security best practices, fewer are inspecting it for threats, he says. "Most people assume that encrypted traffic means safe traffic, but that is unfortunately not the case," Desai says. "That false sense of security can create risk when organizations allow encrypted traffic to go uninspected."


Shadow IT: The Risks and Benefits That Come With It

The Covid-19-induced acceleration of remote work has led to employees being somewhat lax about cybersecurity. Shadow IT might make business operations easier – and many companies have certainly needed that in the last few months – but from a cybersecurity point of view, it also brings more risks. If your IT team doesn’t know about an app or a cloud system that you’re using in your work, they can’t be responsible for any consequences of that usage, including consequences for the infrastructure of the entire organization. The responsibility falls on you to ensure the security of your company’s data while using the shadow IT app; otherwise, your entire organization is at risk. It’s also easy to lose data if your shadow IT systems don’t back it up. If they’re your only method of storage and something goes wrong, you could lose all your valuable data. If you work in government, healthcare, banking, or another heavily regulated sector, chances are that you have local normative acts regulating your IT usage. It’s likely that your internal systems wouldn’t even allow you to access certain websites or apps.


Refactoring Java, Part 2: Stabilizing your legacy code and technical debt

Technical debt is code with problems that can be improved with refactoring. The technical debt metaphor is that it’s like monetary debt. When you borrow money to purchase something, you must pay back more money than you borrowed; that is, you pay back the original sum and interest. When someone writes low-quality code or writes code without first writing automated tests, the organization incurs technical debt, and someone has to pay interest, at some point, for the debt that’s due. The organization’s interest payments aren’t necessarily in money. The biggest cost is the loss of technical agility, since you can’t update or otherwise change the behavior of the software as quickly as needed. And less technical agility means the organization has less business agility: The organization can’t meet stakeholders’ needs at the desired speed. Therefore, the goal is to refactor debt-ridden code. You’re taking the time to fix the code to improve technical and business agility. Now let’s start playing with the Gilded Rose kata’s code and see how to stabilize it, while preparing to add functionality quickly in an agile way. One huge problem with legacy code is that someone else wrote it. 
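One common way to start stabilizing code like this is to pin down its current behavior with characterization tests before changing anything. A minimal JUnit 5 sketch, assuming the kata's usual class names (Item, GildedRose, updateQuality), might look like this; the item values are illustrative.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class GildedRoseCharacterizationTest {

    // Pin down what the legacy code does today for an ordinary item,
    // whether or not that behavior is what we would have designed.
    @Test
    void ordinaryItemDegradesByOneBeforeSellDate() {
        Item[] items = { new Item("Elixir of the Mongoose", 5, 7) };
        GildedRose app = new GildedRose(items);

        app.updateQuality();

        assertEquals(4, items[0].sellIn);
        assertEquals(6, items[0].quality);
    }
}

Once enough of these tests cover the observable behavior, refactoring can proceed safely: any accidental change in behavior breaks a test immediately, which is exactly the safety net debt-ridden code lacks.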


Interactive Imaging Technologies in the Wolfram Mathematica

The range of mathematical problems that can be solved using computer algebra systems is constantly expanding. Considerable research effort is directed at developing algorithms for computing topological invariants of manifolds, knots, and algebraic curves; the cohomology of various mathematical objects; and arithmetic invariants of rings of integers in algebraic number fields. Another example of modern research is quantum algorithms, which sometimes have polynomial complexity where the existing classical algorithms are exponential. Computer algebra comprises theory, technology, and software. Its applied results include algorithms and software for solving problems on a computer in which both the initial data and the results take the form of mathematical expressions and formulas. The main product of computer algebra has become the computer algebra software system: there are many systems in this category, many publications are devoted to them, and updates are published regularly presenting the capabilities of new versions.


EU to introduce data-sharing measures with US in weeks

Companies will be able to use the assessment to decide whether they want to use a data transfer mechanism, and whether they need to introduce additional safeguards, such as encryption, to mitigate any data protection risks, said Gencarelli. The EC is expected to offer companies “non-exhaustive” and “non-prescriptive” guidance on the factors they should take into account. This includes the security of computer systems used, whether data is encrypted and how organisations will respond to requests from the US or other government law enforcement agencies for access to personal data on EU citizens. Gencarelli said relevant questions would include: What do you do as a company when you receive an access request? How do you review it? When do you challenge it – if, of course, you have grounds to challenge it? Companies may also need to assess whether they can use data minimisation principles to ensure that any data on EU citizens they hand over in response to a legitimate request by a government is compliant with EU privacy principles. The guidelines, which will be open for public consultation, will draw on the experience of companies that have developed best practices for SCCs and of civil society organisations.


Unlock the Power of Omnichannel Retail at the Edge

The Edge exists wherever the digital world and physical world intersect, and data is securely collected, generated, and processed to create new value. According to Gartner, by 2025, 75 percent of data will be processed at the Edge. For retailers, Edge technology means real-time data collection, analytics and automated responses where they matter most — on the shop floor, be that physical or virtual. And for today’s retailers, it’s what happens when Edge computing is combined with Computer Vision and AI that is most powerful and exciting, as it creates the many opportunities of omnichannel shopping. With Computer Vision, retailers enter a world of powerful sensor-enabled cameras that can see much more than the human eye. Combined with Edge analytics and AI, Computer Vision can enable retailers to monitor, interpret, and act in real-time across all areas of the retail environment. This type of vision has obvious implications for security, but for retailers it also opens up huge possibilities in understanding shopping behavior and implementing rapid responses. For example, understanding how customers flow through the store, and at what times of the day, can allow the retailer to put more important items directly in their paths to be more visible. 


4 Methods to Scale Automation Effectively

An essential element of the automation toolkit is the value-determination framework, which guides the identification and prioritization of automation opportunities. However, many frameworks apply such a heavy weighting to cost reduction that other value dimensions are rendered meaningless. Evaluate impacts beyond savings to capture other manifestations of value; this will expand the universe of automation opportunities and appeal to more potential internal consumers. Benefits such as improving quality, reducing errors, enhancing speed of execution, liberating capacity to work on more strategic efforts, and enabling scalability should be appropriately considered, incorporated, and weighted in your prioritization framework. Keep in mind that where automation drives the greatest value changes over time depending on both evolving organizational priorities and how extensive the reach of the automation program has been. Periodically reevaluate the value dimensions of your framework and their relative weightings to determine whether any changes are merited. Typically, nascent automation programs take an “inside-out” approach to developing capability, where the COE is established first and federation is built over time as ownership and participation extend radially out to business functions and/or IT. 
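To make the earlier point about weighting concrete, here is a purely illustrative sketch of multi-dimension scoring; the dimension names, scores, and weights are hypothetical and not drawn from the article.

import java.util.Map;

// Hypothetical weighted scoring for one automation opportunity.
class AutomationValueScore {

    static double score(Map<String, Double> dimensionScores, Map<String, Double> weights) {
        double total = 0.0;
        for (Map.Entry<String, Double> e : dimensionScores.entrySet()) {
            // Dimensions with zero weight contribute nothing, so a cost-only
            // weighting silently discards every other form of value.
            total += e.getValue() * weights.getOrDefault(e.getKey(), 0.0);
        }
        return total;
    }

    public static void main(String[] args) {
        Map<String, Double> scores = Map.of(
                "costReduction", 0.9,
                "qualityImprovement", 0.6,
                "speedOfExecution", 0.7,
                "capacityLiberated", 0.5);
        // Re-weighting these periodically changes which opportunities rise to the top.
        Map<String, Double> weights = Map.of(
                "costReduction", 0.4,
                "qualityImprovement", 0.2,
                "speedOfExecution", 0.2,
                "capacityLiberated", 0.2);
        System.out.println("Weighted value: " + score(scores, weights));
    }
}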


Digital transformation: 5 ways to balance creativity and productivity

One of the biggest challenges is how to ensure that creative thinking is an integral part of your program planning and development. Creativity is fueled by knowledge and experience. It’s therefore important to make time for learning, whether that’s through research, reading the latest trade publication, listening to a podcast, attending a (virtual) event, or networking with colleagues. It’s all too easy to dismiss this as a distraction and to think “I haven’t got time for that” because you can’t see an immediate output. But making time to expand your horizons will do wonders for your creative thinking. ... The one thing we initially struggled with, however, was how to keep being innovative. We were used to being together in the same room, bouncing ideas off one another, and brainstorms via video call just didn’t have the same impact. By applying some simple techniques, such as interactive whiteboards and prototyping through demos on video platforms, we’ve managed to restore our creative energy. To make it through the pandemic, companies have had to think outside the box, either by looking at alternative revenue streams or adapting their existing business model. Businesses have proved their ability to make decisions, diversify at speed, and be innovative. 


Google Open-Sources Fast Attention Module Performer

The Transformer neural-network architecture is a common choice for sequence learning, especially in the natural-language processing (NLP) domain. It has several advantages over previous architectures, such as recurrent neural networks (RNNs); in particular, the self-attention mechanism that allows the network to "remember" previous items in the sequence can be executed in parallel on the entire sequence, which speeds up training and inference. However, since self-attention can link each item in the sequence to every other item, the computational and memory complexity of self-attention is O(N²), where N is the maximum sequence length that can be processed. This puts a practical limit on sequence length of around 1,024 items, due to the memory constraints of GPUs. The original Transformer attention mechanism is implemented by a matrix of size N×N, followed by a softmax operation; the rows and columns represent queries and keys, respectively. The attention matrix is multiplied by the input sequence to output a set of similarity values. Performer's FAVOR+ algorithm decomposes the matrix into two matrices which contain "random features": random non-linear functions of the queries and keys. 
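As a schematic sketch of that decomposition only: the code below uses a single positive random-feature map and plain double arrays, not Performer's actual implementation or the full FAVOR+ construction (which, among other refinements, uses orthogonal random features). It shows the key rearrangement, computing phi(Q) · (phi(K)ᵀ V) so the N×N attention matrix is never materialized.

import java.util.Random;

// Schematic linear attention via random features (a simplification of FAVOR+).
// q, k are N x d; v is N x dv; m is the number of random features.
class LinearAttentionSketch {

    static double[][] randomFeatures(double[][] x, double[][] omega) {
        int n = x.length, d = x[0].length, m = omega.length;
        double[][] phi = new double[n][m];
        for (int i = 0; i < n; i++) {
            double sq = 0.0;
            for (int t = 0; t < d; t++) sq += x[i][t] * x[i][t];
            for (int j = 0; j < m; j++) {
                double dot = 0.0;
                for (int t = 0; t < d; t++) dot += omega[j][t] * x[i][t];
                // Positive random features: exp(w·x - ||x||^2 / 2) / sqrt(m)
                phi[i][j] = Math.exp(dot - sq / 2.0) / Math.sqrt(m);
            }
        }
        return phi;
    }

    static double[][] attend(double[][] q, double[][] k, double[][] v, int m, long seed) {
        int n = q.length, d = q[0].length, dv = v[0].length;
        Random rnd = new Random(seed);
        double[][] omega = new double[m][d];
        for (double[] row : omega)
            for (int j = 0; j < d; j++) row[j] = rnd.nextGaussian();

        double[][] phiQ = randomFeatures(q, omega);
        double[][] phiK = randomFeatures(k, omega);

        // kv = phi(K)^T V  (m x dv) and colSum = phi(K)^T 1, both independent of N^2.
        double[][] kv = new double[m][dv];
        double[] colSum = new double[m];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < m; j++) {
                colSum[j] += phiK[i][j];
                for (int c = 0; c < dv; c++) kv[j][c] += phiK[i][j] * v[i][c];
            }
        }

        // out[i] = (phiQ[i] · kv) / (phiQ[i] · colSum), approximating softmax rows.
        double[][] out = new double[n][dv];
        for (int i = 0; i < n; i++) {
            double norm = 1e-9;
            for (int j = 0; j < m; j++) norm += phiQ[i][j] * colSum[j];
            for (int c = 0; c < dv; c++) {
                double s = 0.0;
                for (int j = 0; j < m; j++) s += phiQ[i][j] * kv[j][c];
                out[i][c] = s / norm;
            }
        }
        return out;
    }
}

Because the intermediate products are only m x dv and length-m, the cost grows linearly with the sequence length N rather than quadratically, which is the point of the Performer design.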



Quote for the day:

"Don't let your future successes be prisoners of your past failure, shape the future you want." -- Gordon Tredgold