
Daily Tech Digest - September 17, 2025


Quote for the day:

“We are all failures - at least the best of us are.” -- J.M. Barrie


AI Governance Reaches an Inflection Point

AI adoption has made privacy, compliance, and risk management dramatically more complex. Unlike traditional software, AI models are probabilistic, adaptive, and capable of generating outcomes that are harder to predict or explain. As Blake Brannon, OneTrust’s chief innovation officer, summarized: “The speed of AI innovation has exposed a fundamental mismatch. While AI projects move at unprecedented speed, traditional governance processes are operating at yesterday’s pace.” ... These dynamics explain why, several years ago, Dresner Advisory Services shifted its research lens from data governance to data and analytics (D&A) governance. AI adoption makes clear that organizations must treat governance not as a siloed discipline, but as an integrated framework spanning data, analytics, and intelligent systems. D&A governance is broader in scope than traditional data governance. It encompasses policies, standards, decision rights, procedures, and technologies that govern both data and analytic content across the organization. ... The modernization is not just about oversight — it is about rethinking priorities. Survey respondents identify data quality and controlled access as the most critical enablers of AI success. Security, privacy, and the governance of data models follow closely behind. Collectively, these priorities reflect an emerging consensus: The real foundation of successful AI is not model architecture, but disciplined, transparent, and enforceable governance of data and analytics.


Shai-Hulud Supply Chain Attack: Worm Used to Steal Secrets, 180+ NPM Packages Hit

The packages were injected with a post-install script designed to fetch the TruffleHog secret scanning tool to identify and steal secrets, and to harvest environment variables and IMDS-exposed cloud keys. The script also validates the collected credentials and, if GitHub tokens are identified, it uses them to create a public repository and dump the secrets into it. Additionally, it pushes a GitHub Actions workflow that exfiltrates secrets from each repository to a hardcoded webhook, and migrates private repositories to public ones labeled ‘Shai-Hulud Migration’. ... What makes the attack different is malicious code that uses any identified NPM token to enumerate and update the packages that a compromised maintainer controls, to inject them with the malicious post-install script. “This attack is a self-propagating worm. When a compromised package encounters additional NPM tokens in a victim environment, it will automatically publish malicious versions of any packages it can access,” Wiz notes. ... The security firm warns that the self-spreading potential of the malicious code will likely keep the campaign alive for a few more days. To avoid being infected, users should be wary of any packages that have new versions on NPM but not on GitHub, and are advised to pin dependencies to avoid unexpected package updates.
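As a rough illustration of the pinning advice above, the sketch below (Python, with a hypothetical file path) flags dependencies in a package.json whose version ranges allow silent upgrades. Pairing a check like this with npm's ignore-scripts setting, which disables post-install hooks, reduces exposure to this class of worm.

```python
# Minimal sketch: flag loosely-pinned dependencies in a package.json.
# The path and section names assume a conventional npm manifest layout.
import json
from pathlib import Path

LOOSE_PREFIXES = ("^", "~", ">", "<", "*", "x", "latest")

def find_unpinned(package_json_path: str) -> dict:
    """Return dependencies whose version range permits silent upgrades."""
    manifest = json.loads(Path(package_json_path).read_text())
    unpinned = {}
    for section in ("dependencies", "devDependencies", "optionalDependencies"):
        for name, spec in manifest.get(section, {}).items():
            if spec.startswith(LOOSE_PREFIXES) or " " in spec:
                unpinned[name] = spec
    return unpinned

if __name__ == "__main__":
    for name, spec in find_unpinned("package.json").items():
        print(f"not pinned to an exact version: {name} {spec}")
```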


Scattered Spider Tied to Fresh Attacks on Financial Services

The financial services sector appears to remain at high risk of attack by the group. Over the past two months, elements of Scattered Spider registered "a coordinated set of ticket-themed phishing domains and Salesforce credential harvesting pages" designed to target the financial services sector as well as providers of technology services, suggesting a continuing focus on those sectors, ReliaQuest said. Registering lookalike domain names is a repeat tactic used by many attackers, from Chinese nation-state groups to Scattered Spider. Such URLs are designed to trick victims into thinking a link that they visit is legitimate. ... Members of Scattered Spider and ShinyHunters excel at social engineering, including voice phishing, aka vishing. This often involves tricking a help desk into believing the attacker is a legitimate employee, leading to passwords being reset and single sign-on tokens intercepted. In some cases, experts say, the attackers trick a victim into visiting lookalike support panels they've created as part of a phishing attack. Since the middle of the year, members of Scattered Spider have breached British retailer Marks & Spencer, followed by retailers such as Adidas and Victoria's Secret. The group has been targeting American insurers such as Aflac and Allianz Life, global airlines including Air France, KLM and Qantas, and technology giants Cisco and Google.


Tech’s Tarnished Image Spurring Rise of Chief Trust Officers

In today’s highly competitive world, organizations need every advantage they can get, which can include trust. “Part of selecting vendors, whether it is an official part of the process or not, is evaluating the trust you have in that vendor,” explained Erich Kron ... “By signifying someone in a high level of leadership as the person responsible and accountable for culminating and maintaining that level of trust, the organization may gain significant competitive advantages through loyalty and through competitive means,” he told TechNewsWorld. “The chief trust officer role is a visible, external and internal sign of an organization’s commitment to trust,” added Jim Alkove. ... “It’s an explicit statement of intent to your employees, to your customers, to your partners, to governments that your company cares so much about trust and that you’ve announced that there’s a leader responsible for it,” Alkove, a former CTrO at Salesforce, told TechNewsWorld. ... Forrester noted that trust has become a revenue problem for B2B software companies, and CTrOs provide a means to resolve issues that could stall deals and impact revenue. “When procurement and third-party risk management teams identified issues with a business partner’s cybersecurity posture, contracts stalled,” the report explained. “These issues reflected on the competence, consistency, and dependability of the potential partner. Chief trust officers and their teams step in to remove those obstacles and move deals along.”


AI ROI Isn't About Cost Savings Anymore

The traditional metrics of ROI, including cost savings, headcount reduction and revenue uplift, are no longer sufficient. Let's start with the obvious challenge: ROI today is often measured vertically, at the use-case or project level, tracking model accuracy or incremental sales. Although necessary, this vertical lens misses the broader picture. What's needed is a horizontal perspective on ROI - metrics that capture how investments in cloud infrastructure, data engineering and cross-silo integration accelerate every subsequent AI initiative. ... When data is cleaned and standardized for one use case, the next model development becomes faster and more reliable. Yet these productivity gains rarely appear in ROI calculations. The same applies to interoperability across functions. For example, predictive models developed for finance may inform HR or marketing strategies, multiplying AI's value in ways traditional KPIs overlook. ... Emerging models, such as Gartner's multidimensional AI measurement frameworks, and India's evolving AI governance standards offer early guidance. But turning them into practice requires rigor - from assessing how data improvements accelerate downstream use cases to quantifying cross-team synergies, and even recognizing softer outcomes like trust and employee well-being. "AI is neither hype nor savior - it is a tool," Gupta said.


How a fake ICS network can reveal real cyberattacks

Most ICS honeypots today are low interaction, using software to simulate devices like programmable logic controllers (PLCs). These setups are useful for detecting basic threats but are easy for skilled attackers to identify. Once attackers realize they are interacting with a decoy, they stop revealing their tactics. ... ICSLure takes a different approach. It combines actual PLC hardware with realistic simulations of physical processes, such as the movement of machinery on a factory floor. This creates what the researchers call a very high interaction environment. For attackers, ICSLure feels like a live industrial network. For defenders, it provides more accurate data about how adversaries move inside an ICS environment and the techniques they use to disrupt operations. Angelo Furfaro, one of the researchers behind ICSLure, told Help Net Security that deploying this type of environment safely requires careful planning. “The honeypot infrastructure must be completely segregated from any production network through dedicated VLANs, firewalls, and demilitarized zones, ensuring that malicious activity cannot spill over into critical operations,” he said. “PLCs should only interact with simulated plants or digital twins, eliminating the possibility of executing harmful commands on physical processes.”


The Biggest Barriers Blocking Agentic AI Adoption

To achieve the critical mass needed to fuel mainstream adoption of AI agents, we have to be able to trust them. This is true on several levels; we have to trust them with the sensitive and personal data they need to make decisions on our behalf, and we have to trust that the technology works and that our efforts aren’t hampered by specific AI flaws like hallucinations. And if we are trusting it to make serious decisions, such as buying decisions, we have to trust that it will make the right ones and not waste our money. ... Another problem is that agentic AI relies on the ability of agents to interact and operate with third-party systems, and many third-party systems aren’t set up to work with this yet. Computer-using agents (such as OpenAI Operator and Manus AI) circumvent this by using computer vision to understand what’s on a screen. This means they can use many websites and apps just like we can, whether or not they’re programmed to work with them. ... Finally, there are wider cultural concerns that go beyond technology. Some people are uncomfortable with the idea of letting AI make decisions for them, regardless of how routine or mundane those decisions may be. Others are nervous about the impact that AI will have on jobs, society or the planet. These are all totally valid and understandable concerns and can’t be dismissed as barriers to be overcome simply through top-down education and messaging.


The Legal Perils of Dark Patterns in India: Intersection between Data Privacy and Consumer Protection

Dark patterns are deceptive UI or UX design patterns that mislead or trick users by subverting their autonomy and manipulating them into taking actions they would not otherwise have taken. The term was coined by UX designer Harry Brignull, who registered the website darkpatterns.org as a public-interest library showcasing all types of such UX/UI designs; that is how the name “dark pattern” came into being. ... Under Section 20 of the CP Act, the CCPA can order the recall of goods, the withdrawal of services, or even the discontinuation of such services if it finds that an entity is engaging in dark patterns in breach of the guidelines. ... By their very design, some patterns harm the user in two ways: first, by manipulating them into choices they would not have otherwise made; and second, by compelling the collection or processing of personal data in ways that breach data protection requirements. In such cases, the entity is not only exploiting the individual but is also failing to meet its legal duties under the DPDPA, thereby creating exposure under both the CP Act and the DPDPA. ... Under the DPDPA, the stakes are now significantly higher. The Data Protection Board of India has the authority to impose financial penalties of up to Rs 50 crores for not obtaining purposeful consent or for disregarding technical and organisational measures.


In Order to Scale AI with Confidence, Enterprise CTOs Must Unlock the Value of Unstructured Data

Over the past two years, we’ve witnessed rapid advancements in Large Language Models (LLMs). As these models become increasingly powerful–and more commoditized–the true competitive edge for enterprises will lie in how effectively they harness their internal data. Unstructured content forms the foundation of modern AI systems, making it essential for organizations to build strong unstructured data infrastructure to succeed in the AI-driven era. This is what we mean by an unstructured data foundation: the ability for companies to rapidly identify what unstructured data exists across the organization, assess its quality, sensitivity, and safety, enrich and contextualize it to improve AI performance, and ultimately create a governed system for generating and maintaining high-quality data products at scale. In 2025, unstructured data is as much about quality as it is about quantity. “Quality” in the context of unstructured data remains largely uncharted territory. Companies need clear frameworks to assess dimensions like relevance, freshness, and duplication. Over the past six years, the volume and variety of unstructured data–and the number of AI applications that generate or depend on it–have exploded. Many have called it the largest and most valuable source of data within an organization, and I’d agree–especially as AI becomes increasingly central to how enterprises operate. Here’s why.


Scaling Databases for Large Multi-Tenant Applications

Building and maintaining multi-tenant database applications is one of the more challenging aspects of being a developer, administrator or analyst. Until the debut of AI systems, with their power-hungry GPUs, database workloads represented the most expensive workloads because of their demands on memory, CPU and storage performance to work effectively. ... Sharding is a data management technique that effectively partitions data across multiple databases. At its center, you need something that I like to call a command and control database, though I've also seen it called a shard-map manager or a router database. This database contains the metadata around the shards and your environment, and routes application calls to the appropriate shard or database. ... If you are working on the Microsoft stack, I'm going to give a shout out to elastic database tools. This .NET library gives you tools for shard-map management, data-dependent routing, and multi-shard queries as needed. Additionally, consider the ability to add and remove shards to match shifting demands. ... Some other tooling you need to think about in planning is how to execute schema changes across your partitions. Database DevOps is a mature practice, but rolling out changes across a fleet of databases requires careful forethought and operations.
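As a sketch of the data-dependent routing idea (not the elastic database tools API), the snippet below maps a tenant key to a shard connection string; the shard names and connection strings are placeholders.

```python
# Minimal sketch of a shard-map lookup: tenants are assigned to shards by a stable
# hash. Shard names and connection strings are hypothetical placeholders.
import hashlib

SHARDS = {
    0: "Server=shard0.example.net;Database=tenants_0",
    1: "Server=shard1.example.net;Database=tenants_1",
    2: "Server=shard2.example.net;Database=tenants_2",
}

def shard_for_tenant(tenant_id: str) -> str:
    """Data-dependent routing: map a tenant key to its shard's connection string."""
    digest = hashlib.sha256(tenant_id.encode("utf-8")).hexdigest()
    shard_key = int(digest, 16) % len(SHARDS)
    return SHARDS[shard_key]

print(shard_for_tenant("tenant-4711"))
```

In practice the tenant-to-shard assignments live in the command and control database itself, so shards can be added, split, or rebalanced without rehashing every tenant; the hash-modulo above is only the simplest possible illustration.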

Daily Tech Digest - February 05, 2025


Quote for the day:

"You may only succeed if you desire succeeding; you may only fail if you do not mind failing." --Philippos


Neural Networks – Intuitively and Exhaustively Explained

The process of thinking within the human brain is the result of communication between neurons. You might receive stimulus in the form of something you saw, then that information is propagated to neurons in the brain via electrochemical signals. The first neurons in the brain receive that stimulus, then each neuron may choose whether or not to "fire" based on how much stimulus it received. "Firing", in this case, is a neuron’s decision to send signals to the neurons it’s connected to. ... Neural networks are, essentially, a mathematically convenient and simplified version of neurons within the brain. A neural network is made up of elements called "perceptrons", which are directly inspired by neurons. ... In AI there are many activation functions, but the industry has largely converged on three popular ones: ReLU, Sigmoid, and Softmax, which are used in a variety of different applications. Out of all of them, ReLU is the most common due to its simplicity and ability to generalize to mimic almost any other function. ... One of the fundamental ideas of AI is that you can "train" a model. This is done by asking a neural network (which starts its life as a big pile of random data) to do some task. Then, you somehow update the model based on how the model’s output compares to a known good answer.
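For a concrete picture of the perceptron idea, here is a minimal NumPy sketch of a two-layer forward pass with ReLU; the weights are random stand-ins for values that training would normally learn.

```python
# Toy sketch of perceptron layers with a ReLU activation, using plain NumPy.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)   # "fire" with only the positive part of the stimulus

def layer(x, weights, bias):
    return relu(weights @ x + bias)   # weighted sum of inputs, then activation

x = rng.normal(size=3)                            # stimulus: 3 input features
w1, b1 = rng.normal(size=(4, 3)), np.zeros(4)     # 4 perceptrons, 3 inputs each
w2, b2 = rng.normal(size=(2, 4)), np.zeros(2)     # second layer of 2 perceptrons

hidden = layer(x, w1, b1)
output = layer(hidden, w2, b2)
print(hidden, output)
```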


Why honeypots deserve a spot in your cybersecurity arsenal

In addition to providing critical threat intelligence for defenders, honeypots can often serve as helpful deception techniques to ensure attackers focus on decoys instead of valuable and critical organizational data and systems. Once malicious activity is identified, defenders can use the findings from the honeypots to look for indicators of compromise (IoC) in other areas of their systems and environments, potentially catching further malicious activity and minimizing the dwell time of attackers. In addition to threat intelligence and attack detection value, honeytokens often have the benefit of having minimal false positives, given they are highly customized decoy resources deployed with the intent of not being accessed. This contrasts with broader security tooling, which often suffers from high rates of false positives from low-fidelity alerts and findings that burden security teams and developers. ... Enterprises need to put some thought into the placement of the honeypots. It is common for them to be used in environments and systems that may be potentially easier for attackers to access, such as publicly exposed endpoints and systems that are internet accessible, as well as internal network environments and systems. The former, of course, is likely to get more interaction and provide broader generic insights. 
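As a toy illustration of the decoy idea (not a production-grade honeypot), the sketch below listens on a port that no legitimate service should use and logs every connection attempt as a potential indicator of compromise; the port, banner, and log path are arbitrary choices, and a real deployment needs isolation and careful placement as described above.

```python
# Minimal low-interaction decoy listener: legitimate users have no reason to connect,
# so every connection attempt is logged as suspicious.
import logging
import socketserver

logging.basicConfig(filename="honeypot.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

class DecoyHandler(socketserver.BaseRequestHandler):
    def handle(self):
        peer = self.client_address[0]
        data = self.request.recv(1024)
        logging.info("connection from %s, first bytes: %r", peer, data[:64])
        self.request.sendall(b"220 service ready\r\n")  # bland banner, reveal nothing

if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", 2222), DecoyHandler) as srv:
        srv.serve_forever()
```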


IoT Technology: Emerging Trends Impacting Industry And Consumers

An emerging IoT trend is the rise of emotion-aware devices that use sensors and artificial intelligence to detect human emotions through voice, facial expressions or physiological data. For businesses, this opens doors to hyper-personalized customer experiences in industries like retail and healthcare. For consumers, it means more empathetic tech—think stress-relieving smart homes or wearables that detect and respond to anxiety. ... The increasing prevalence of IoT tech means that it is being increasingly deployed into “less connected” environments. As a result, the user experience needs to be adapted so that it’s not wholly dependent on good connectivity—instead, priorities must include how to gracefully handle data gaps and robust fallbacks with missing control instructions. ... IoT systems can now learn user preferences, optimizing everything from home automation to healthcare. For businesses, this means deeper customer engagement and loyalty; for consumers, it translates to more intuitive, seamless interactions that enhance daily life. ... While not a newly emerging trend, the Industrial Internet of Things is an area of focus for manufacturers seeking greater efficiency, productivity and safety. Connecting machines and systems with a centralized work management platform gives manufacturers access to real-time data. 


When digital literacy fails, IT gets the blame

By insisting that requisite digital skills and system education are mastered before a system cutover occurs, the CIO assumes a leadership role in the educational portion of each digital project, even though IT itself may not be doing the training. Where IT should be inserting itself is in the area of system skills training and testing before the system goes live. The dual goals of a successful digital project should be two-fold: a system that’s complete and ready to use; and a workforce that’s skilled and ready to use it. ... IT business analysts, help desk personnel, IT trainers, and technical support personnel all have people-helping and support skills that can contribute to digital education efforts throughout the company. The more support that users have, the more confidence they will gain in new digital systems and business processes — and the more successful the company’s digital initiatives will be. ... Eventually, most of the technical glitches were resolved, and doctors, patients, and support medical personnel learned how to integrate virtual visits with regular physical visits and with the medical record system. By the time the pandemic hit in 2020, telehealth visits were already well under way. These visits worked because the IT was there, the pandemic created an emergency scenario, and, most importantly, doctors, patients, and medical support personnel were already trained on using these systems to best advantage.


What you need to know about developing AI agents

“The success of AI agents requires a foundational platform to handle data integration, effective process automation, and unstructured data management,” says Rich Waldron, co-founder and CEO of Tray.ai. “AI agents can be architected to align with strict data policies and security protocols, which makes them effective for IT teams to drive productivity gains while ensuring compliance.” ... One option for AI agent development comes directly as a service from platform vendors that use your data to enable agent analysis, then provide the APIs to perform transactions. A second option is from low-code or no-code, automation, and data fabric platforms that can offer general-purpose tools for agent development. “A mix of low-code and pro-code tools will be used to build agents, but low-code will dominate since business analysts will be empowered to build their own solutions,” says David Brooks, SVP of Evangelism at Copado. “This will benefit the business through rapid iteration of agents that address critical business needs. Pro coders will use AI agents to build services and integrations that provide agency.” ... Organizations looking to be early adopters in developing AI agents will likely need to review their data management platforms, development tools, and smarter devops processes to enable developing and deploying agents at scale.


The Path of Least Resistance to Privileged Access Management

While PAM allows organizations to segment accounts, providing a barrier between the user’s standard access and needed privileged access and restricting access to information that is not needed, it also adds a layer of internal and organizational complexity. This is because it gives the impression of removing users’ access to files and accounts that they have typically had the right to use, and they do not always understand why. It can bring changes to their established processes. They don’t see the security benefit and often resist the approach, seeing it as an obstacle to doing their jobs and causing frustration amongst teams. As such, PAM is perceived to be difficult to introduce because of this friction. ... A significant gap in the PAM implementation process lies in the lack of comprehensive awareness among administrators. They often do not have a complete inventory of all accounts, the associated access levels, their purposes, ownership, or the extent of the security issues they face. ... Consider a scenario where a company has a privileged Windows account with access to 100 servers. If PAM is instructed to discover the scope of this Windows account, it might only identify the servers that have been accessed previously by the account, without revealing the full extent of its access or the actions performed.


Quantum networking advances on Earth and in space

“The most established use case of quantum networking to date is quantum key distribution — QKD — a technology first commercialized around 2003,” says Monga. “Since then, substantial advancements have been achieved globally in the development and production deployment of QKD, which leverages secure quantum channels to exchange encryption keys, ensuring data transfer security over conventional networks.” Quantum key distribution networks are already up and running, and are being used by companies, he says, in the U.S., in Europe, and in China. “Many commercial companies and startups now offer QKD products, providing secure quantum channels for the exchange of encryption keys, which ensures the safe transfer of data over traditional networks,” he says. Companies offering QKD include Toshiba, ID Quantique, LuxQuanta, HEQA Security, Think Quantum, and others. One enterprise already using a quantum network to secure communications is JPMorgan Chase, which is connecting two data centers with a high-speed quantum network over fiber. It also has a third quantum node set up to test next-generation quantum technologies. Meanwhile, the need for secure quantum networks is higher than ever, as quantum computers get closer to prime time.


What are the Key Challenges in Mobile App Testing?

One of the major issues in mobile app testing is the sheer variety of devices in the market. With numerous models, each having different screen sizes, pixel densities, operating system (OS) versions and hardware specifications, ensuring the app is responsive across all devices becomes a challenge. Testing for compatibility on every device and OS can be tiresome and expensive. While tools like emulators and cloud-based testing platforms can help, it remains essential to conduct tests on real devices to ensure accurate results. ... In addition to device fragmentation, another key challenge is the wide range of OS versions. A device may run one version of an OS while another runs on a different version, leading to inconsistencies in app performance. Just like any other software, mobile apps need to function seamlessly across multiple OS versions, including Android, iPhone Operating System (iOS) and other platforms. Furthermore, operating systems are updated frequently, which can cause apps to break or stop functioning. ... Mobile app users interact with apps under various network conditions, including Wi-Fi, 4G, 5G or limited connectivity. Testing how an app performs in different network conditions is crucial to ensure it does not hang or load slowly when the connection is weak.


Reimagining KYC to Meet Regulatory Scrutiny

Implementing AI and ML allows KYC to run in the background rather than having staff manually review information as they can, said Jennifer Pitt, senior analyst for fraud and cybersecurity with Javelin Strategy & Research. “This allows the KYC team to shift to other business areas that require more human interaction like investigations,” Pitt said. Yet use of AI and ML remains low at many banks. Currently, fraudsters and cybercriminals are using generative adversarial networks - machine learning models that create new data that mirrors a training set - to make fraud less detectable. Fraud professionals should leverage generative adversarial networks to create large datasets that closely mirror actual fraudulent behavior. This process involves using a generator to create synthetic transaction data and a discriminator to distinguish between real and synthetic data. By training these models iteratively, the generator improves its ability to produce realistic fraudulent transactions, allowing fraud professionals to simulate emerging fraud types and account takeovers, and enhance detection models’ sensitivity to these evolving threats. Instead of waiting to gather sufficient historical data from known fraudulent behaviors, GANs enable a more proactive approach, helping fraud teams quickly understand new fraud trends and patterns, Pitt said.
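For readers unfamiliar with the generator/discriminator loop Pitt describes, here is a deliberately tiny PyTorch sketch trained on a synthetic one-dimensional "transaction amount" distribution; real fraud models use far richer feature vectors, so treat this only as an outline of the adversarial training mechanics.

```python
# Minimal GAN sketch: a generator learns to produce values that a discriminator
# can no longer distinguish from "real" transaction amounts.
import torch
import torch.nn as nn

torch.manual_seed(0)

real_data = lambda n: torch.randn(n, 1) * 15 + 80   # stand-in for genuine amounts

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())   # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator: tell real transactions from generated ones.
    real, fake = real_data(64), G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: produce samples the discriminator accepts as real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(5, 8)).detach().squeeze())   # synthetic "transactions"
```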


How Agentic AI Will Transform Banking (and Banks)

Agentic AI has two intertwined vectors. For banks, one path is internal, and focused on operational efficiency for tasks including the automation of routine data entry and compliance and regulatory checks, summaries of email and reports, and the construction of predictive models for trading and risk management to bolster insights into market dynamics, fraud and credit and liquidity risk. The other path is consumer facing, and revolves around managing customer relationships, from automated help desks staffed by chatbots to personalized investment portfolio recommendations. Both trajectories aim to improve efficiency and reduce costs. Agentic AI "could have a bigger impact on the economy and finance than the internet era," Citigroup wrote in a January 2025 report that calls the technology the "Do It For Me" Economy. ... Meanwhile, automated AI decisions could inadvertently violate laws and regulations on consumer protection, anti-money laundering or fair lending laws. Agentic AI that can instruct an agent to make a trade based on bad data or assumptions could lead to financial losses and create systemic risk within the banking system. "Human oversight is still needed to oversee inputs and review the decisioning process," Davis says. 

Daily Tech Digest - November 30, 2023

Super apps: the next big thing for enterprise IT?

Enterprise super apps will allow employers to bundle the apps employees use under one umbrella, he said. This will create efficiency and convenience, where different departments can select only the apps they want, much like a marketplace, to customize their working experiences. Other advantages of super apps for enterprises include providing a more consistent user experience, combating app fatigue and app sprawl, and enhancing security by consolidating functions into one company-managed app. Gartner analyst Jason Wong said the analyst firm is seeing interest in super apps from organizations, including big box stores and other retailers, that have a lot of frontline workers who rely on their mobile devices to do their jobs. One company that has adopted a super app to enhance the experience of its frontline workers and other employees is TeamHealth, a leading physician practice in the US. TeamHealth is using an employee super app from MangoApps, which unifies all the tools and resources employees use daily within one central app.


Meta faces GDPR complaint over processing personal data without 'free consent'

The case centres on whether Meta can legitimately claim to have obtained free consent from its customers to process their data, as required under GDPR, when the only alternative is for customers to pay a substantial fee to opt out of ad-tracking. The complaint will be watched closely by social media companies such as TikTok, which are reported to be considering offering ad-free services to customers outside the US to meet the requirements of European data protection law. Meta denied that it was in breach of European data protection law, citing a European Court of Justice ruling in July 2023 which it said expressly recognised that a subscription model was a valid form of consent for an ad-funded service. Spokesman Matt Pollard referred to a blog post announcing Meta’s subscription model, which stated, “The option for people to purchase a subscription for no ads balances the requirements of European regulators while giving users choice and allowing Meta to continue serving all people in the EU, EEA and Switzerland”.


India’s Path to Cyber Resilience Through DevSecOps

DevSecOps, a collaborative methodology between development, security, and operations, places a strong emphasis on integrating security practices into the software development and deployment processes. In India, the approach has gained substantial traction due to several reasons, including a security-first mindset, adherence to compliance requirements and escalating cybersecurity threats. A survey revealed that the primary business driver for DevSecOps adoption is a keen focus on business agility, achieved through the rapid and frequent delivery of application capabilities, as reported by 59 per cent of the respondents. From a technological perspective, the most significant factor is the enhanced management of cybersecurity threats and challenges, a factor highlighted by 57 per cent of the participants. Businesses now understand the importance of proactive security measures. DevSecOps encourages a security-first mentality, ensuring that security is an integral part of the development process from the outset.


Cybersecurity and Burnout: The Cybersecurity Professional's Silent Enemy

In the world of cybersecurity, where digital threats are a constant, the mental health of professionals is an invaluable asset. Mindfulness not only emerges as a shield against the stress and burnout that pose security risks to organizations, but it also becomes a key strategy to reduce the costs associated with lost productivity and staff turnover. By adopting mindfulness practices and preventing burnout, cybersecurity professionals not only preserve their well-being, but also contribute to a healthier work environment, improve the responsiveness and effectiveness of cybersecurity teams, and ensure the continued success of companies in this critical technology field. Cybersecurity challenges are multidimensional. They cannot be managed in only one dimension. Mindfulness is an essential tool to keep us one step ahead. By recognizing the value of emotional well-being in the fight against cyberattacks, we can build a stronger and more sustainable defense. Cybersecurity is not only a technical issue, but also a human one, and mindfulness presents itself as a key piece in this intricate security puzzle.


Will AI replace Software Engineers?

While AI is automating some tasks previously done by devs, it’s not likely to lead to widespread job losses. In fact, AI is creating new job opportunities for software engineers with the skills and expertise to work with AI. According to a 2022 report by the McKinsey Global Institute, AI is expected to create 9 million new jobs in the United States by 2030. The jobs that are most likely to be lost to AI are those that are routine and repetitive, such as data entry and coding. However, software engineers with the skills to work with AI will be in high demand. ... Embrace AI as a tool to enhance your skills and productivity as a software engineer. While there's concern about AI replacing software engineers, it's unlikely to replace high-value developers who work on complex and innovative software. To avoid being replaced by AI, focus on building sophisticated and creative solutions. Stay up-to-date with the latest AI and software engineering developments, as this field is constantly evolving. Adapt to the changing landscape by acquiring new skills and techniques. Remember that AI and software engineering can collaborate effectively, as AI complements human skills. 


Bridging the risk exposure gap with strategies for internal auditors

Without a strategic view of the future — including a clear-eyed assessment of strengths, weaknesses, opportunities, threats, priorities, and areas of leakage — internal audit is unlikely to recognize actions needed to enable success. There is no bigger threat to organizational success than a misalignment between exponentially increasing risks and a failure to respond due to a lack of vision, resources, or initiative. Create and maintain a good, well-documented strategic plan for your internal audit function. This can help you organize your thinking, force discipline in definitions, facilitate implementation, and continue asking the right questions. Nobody knows for certain what lies ahead, and a well-developed strategic plan is a key tool for preparing for chaos and ambiguity. ... Companies may have less time than they think to prepare for compliance, and internal auditors should be supporting their organizations in getting the right enabling processes and technologies in place as soon as possible. This will require a continuing focus on breaking down silos and improving how internal audit collaborates with its risk and compliance colleagues. 


Generative AI in the Age of Zero-Trust

Enter generative AI. Generative AI models generate content, predictions, and solutions based on vast amounts of available data. They’re making waves not just for their ‘wow’ factor, but for their practical applications. It’s only natural that employees would gravitate to the latest technology offering the ability to make them more efficient. For cybersecurity, this means potential tools that offer predictive threat analysis based on patterns, provide automatic code fixes, dynamically adjust policies in response to evolving threat landscapes and even automatically respond to active attacks. If used correctly, generative AI can shoulder some of the burdens of the complexities that have built up over the course of the zero-trust era. But how can you trust generative AI if you are not in control of the data that trains it? You can’t, really. ... This is forcing organizations to start setting generative AI policies. Those that choose the zero-trust path and ban its use will only repeat the mistakes of the past. Employees will find ways around bans if it means getting their job done more efficiently. Those who harness it will make a calculated tradeoff between control and productivity that will keep them competitive in their respective markets.


Organizations Must Embrace Dynamic Honeypots to Outpace Attackers

There are a number of ways in which AI-powered honeypots are superior to their static counterparts. The first is that because they can independently evolve, they can become far more convincing through automatic evolution. This sidesteps the problem of constantly making manual adjustments to present the honeypot as a realistic facsimile. Secondly, as the AI learns and develops, it will become far more adept at planting traps for unwary attackers, meaning that hackers will not only have to go slower than usual to try and avoid said traps but once one is triggered, it will likely provide far richer data to defense teams about what attackers are clicking on, the information they’re after, how they’re moving across the site. Finally, using AI tools to design honeypots means that, under the right circumstances, even tangible assets can be turned into honeypots. ... Therefore, having tangible assets such as honeypots allows defense teams to target their energy more efficiently and enables the AI to learn faster, as there will likely be more attackers coming after a real asset than a fake one.


Almost all developers are using AI despite security concerns, survey suggests

Many developers place far too much trust in the security of code suggestions from generative AI, the report noted, despite clear evidence that these systems consistently make insecure suggestions. “The way that code is generated by generative AI coding systems like Copilot and others feels like magic," Maple said. "When code just appears and functionally works, people believe too much in the smoke and mirrors and magic because it appears so good.” Developers can also value machine output over their own talents, he continued. "There’s almost an imposter syndrome," he said. ... Because AI coding systems use reinforcement learning algorithms to improve and tune results when users accept insecure open-source components embedded in suggestions, the AI systems are more likely to label those components as secure even if this is not the case, it continued. This risks the creation of a feedback loop where developers accept insecure open-source suggestions from AI tools and then those suggestions are not scanned, poisoning not only their organization’s application code base but the recommendation systems for the AI systems themselves, it explained.


Former Uber CISO Speaks Out, After 6 Years, on Data Breach, SolarWinds

Sullivan says the key mistake he made was not bringing in third-party investigators and counsel to review how his team handled the breach. "The thing we didn't do was insist that we bring in a third party to validate all of the decisions that were made," he says. "I hate to say it, but it's more CYA." Now, Sullivan advises other CISOs and companies about navigating their responsibilities in disclosing breaches, especially as the new Securities & Exchange Commission (SEC) incident reporting requirements are set to take effect. Sullivan says he welcomes the new regulations. "I think anything that pushes towards more transparency is a good thing," he says. He recalls that when he was on former President Barack Obama's Commission on Enhancing National Cybersecurity, Sullivan was pushing to give companies immunity if they are transparent early on during security incidents. That hasn't happened until now, according to Sullivan, who says the jury is still out on the new regulations, which will require action starting in December.



Quote for the day:

"The distance between insanity and genius is measured only by success." -- Bruce Feirstein

Daily Tech Digest - December 26, 2021

All You Need to Know About Unsupervised Reinforcement Learning

Unsupervised learning can be considered an approach to learning from huge amounts of unannotated data, and reinforcement learning can be considered an approach to learning from a very small amount of data. A combination of these learning methods can be considered unsupervised reinforcement learning, which is essentially an improvement on reinforcement learning. In this article, we are going to discuss unsupervised reinforcement learning in detail along with its special features and application areas. ... When we talk about the basic process followed by unsupervised learning, we define objective functions on it such that the process is capable of categorizing unannotated or unlabeled data. There are various problems that can be dealt with using unsupervised learning. Some of them are as follows: label creation, annotation, and maintenance is a challenging discipline that also requires a lot of time and effort; many domains require expertise in annotation, like law, medicine, ethics, etc.; in reinforcement learning, reward annotation is also confusing.


Explainable AI (XAI) Methods Part 1 — Partial Dependence Plot (PDP)

Partial Dependence (PD) is a global and model-agnostic XAI method. Global methods give a comprehensive explanation on the entire data set, describing the impact of feature(s) on the target variable in the context of the overall data. Local methods, on the other hand, describes the impact of feature(s) on an observation level. Model-agnostic means that the method can be applied to any algorithm or model. Simply put, PDP shows the marginal effect or contribution of individual feature(s) to the predictive value of your black box model ... Unfortunately, PDP is not some magic wand that you can waver in any occasion. It has a major assumption that is made. The so-called assumption of independence is the biggest issue with PD plots. ... “If the feature for which you computed the PDP is not correlated with the other features, then the PDPs perfectly represent how the feature influences the prediction on average. In the uncorrelated case, the interpretation is clear: The partial dependence plot shows how the average prediction in your dataset changes when the j-th feature is changed.”
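The averaging behind a PD curve is simple enough to spell out directly. The sketch below (synthetic data, with a generic scikit-learn regressor standing in for the black box) forces the feature of interest to each grid value across every row, predicts, and averages, which is exactly the quantity a PDP plots.

```python
# Minimal sketch of computing a partial dependence curve by hand.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 3))
y = X[:, 0] ** 2 + X[:, 1] + rng.normal(scale=0.1, size=500)

model = GradientBoostingRegressor().fit(X, y)   # the "black box"

def partial_dependence(model, X, feature, grid_size=20):
    grid = np.linspace(X[:, feature].min(), X[:, feature].max(), grid_size)
    averages = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value          # force the feature to this value everywhere
        averages.append(model.predict(X_mod).mean())
    return grid, np.array(averages)

grid, pd_curve = partial_dependence(model, X, feature=0)
print(np.round(pd_curve, 2))               # roughly quadratic, mirroring x0**2
```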


Worried about super-intelligent machines? They are already here

For anyone who thinks that living in a world dominated by super-intelligent machines is a “not in my lifetime” prospect, here’s a salutary thought: we already live in such a world! The AIs in question are called corporations. They are definitely super-intelligent, in that the collective IQ of the humans they employ dwarfs that of ordinary people and, indeed, often of governments. They have immense wealth and resources. Their lifespans greatly exceed that of mere humans. And they exist to achieve one overriding objective: to increase and thereby maximise shareholder value. In order to achieve that they will relentlessly do whatever it takes, regardless of ethical considerations, collateral damage to society, democracy or the planet. One such super-intelligent machine is called Facebook. ... “We connect people. Period. That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we have to do to bring more communication in. The work we will likely have to do in China some day. All of it.”


Supervised vs. Unsupervised vs. Reinforcement Learning: What’s the Difference?

Reinforcement learning is a technique that provides training feedback using a reward mechanism. The learning process occurs as a machine, or Agent, that interacts with an environment and tries a variety of methods to reach an outcome. The Agent is rewarded or punished when it reaches a desirable or undesirable State. The Agent learns which states lead to good outcomes and which are disastrous and must be avoided. Success is measured with a score (denoted as Q, thus reinforcement learning is sometimes called Q-learning) so that the Agent can iteratively learn to achieve a higher score. Reinforcement learning can be applied to the control of a simple machine like a car driving down a winding road. The Agent would observe its current State by taking measurements such as current speed, direction relative to the road, and distances to the sides of the road. The Agent can take actions that change its state like turning the wheel or applying the gas or brakes.
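A minimal sketch of that Q-learning loop, using an invented one-dimensional "road" with a goal state, looks like this; the rewards and layout are purely illustrative.

```python
# Toy Q-learning: an agent on a 1-D road learns which action in each state
# leads toward the goal. Rewards and layout are invented for illustration.
import numpy as np

n_states, n_actions = 6, 2            # action 0 = left, 1 = right; goal is state 5
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.5, 0.9, 0.1
rng = np.random.default_rng(0)

def step(state, action):
    nxt = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == n_states - 1 else -0.01   # reward at the goal, tiny cost elsewhere
    return nxt, reward, nxt == n_states - 1

for episode in range(200):
    state, done = 0, False
    while not done:
        action = rng.integers(n_actions) if rng.random() < epsilon else int(Q[state].argmax())
        nxt, reward, done = step(state, action)
        # Bellman update: move Q toward reward + discounted best future value.
        Q[state, action] += alpha * (reward + gamma * Q[nxt].max() - Q[state, action])
        state = nxt

print(Q.argmax(axis=1))   # learned policy: action 1 ("right") in every state before the goal
```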


Quantum computers: Eight ways quantum computing is going to change the world

Discovering new drugs takes so long: scientists mostly adopt a trial-and-error approach, in which they test thousands of molecules against a target disease in the hope that a successful match will eventually be found. Quantum computers, however, have the potential to one day resolve the molecular simulation problem in minutes. The systems are designed to be able to carry out many calculations at the same time, meaning that they could seamlessly simulate all of the most complex interactions between particles that make up molecules, enabling scientists to rapidly identify candidates for successful drugs. This would mean that life-saving drugs, which currently take an average 10 years to reach the market, could be designed faster -- and much more cost-efficiently. Pharmaceutical companies are paying attention: earlier this year, healthcare giant Roche announced a partnership with Cambridge Quantum Computing (CQC) to support efforts in research tackling Alzheimer's disease.


What is a honeypot crypto scam and how to spot it?

Even though it looks like a part of the network, it is isolated and monitored. Because legitimate users have no motive to access a honeypot, all attempts to communicate with it are regarded as hostile. Honeypots are frequently deployed in a network's demilitarized zone (DMZ). This strategy separates it from the leading production network while keeping it connected. A honeypot in the DMZ may be monitored from afar while attackers access it, reducing the danger of a compromised main network. To detect attempts to infiltrate the internal network, honeypots can be placed outside the external firewall, facing the internet. The actual location of the honeypot depends on how intricate it is, the type of traffic it wants to attract and how close it is to critical business resources. It will always be isolated from the production environment, regardless of where it is placed. Logging and viewing honeypot activity provides insight into the degree and sorts of threats that a network infrastructure confronts while diverting attackers' attention away from real-world assets.


From DeFi to NFTs to metaverse, digital assets revolution is remaking the world

This decentralised concept offers both opportunities and challenges. How could a system work among a group of participants—there could be bad apples—if they were given the option of pseudonymity? Who will update the ledger? How will we reach a uniform version of truth? Bitcoin solved a lot of the long-standing issues with cryptographic consensus methods with a combination of private and public keys, and carefully aligned economic incentives. Suppose User A wants to transfer 1 bitcoin to User B. The transaction data would be authenticated, verified, and moved to the ‘mempool’ (memory pool is a holding room for all unconfirmed transactions), where they will be collected in groups or ‘blocks’. One block becomes one entry in the Bitcoin ledger, and around 3,000 transactions will appear in one block. The ledger would be updated every 10 minutes, and the system would converge on the latest single version of truth. The next big question is, who in the system gets to write the next entry in the ledger? That is where the consensus protocol comes into play.
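As a toy illustration of why the ledger resists tampering, the snippet below chains a few blocks by hashing each one and storing that hash in its successor; it deliberately omits mining, signatures, and the consensus protocol discussed above.

```python
# Toy block chaining: each block commits to the previous one through its hash.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

genesis = {"height": 0, "prev": None, "txs": []}
block_1 = {"height": 1, "prev": block_hash(genesis),
           "txs": [{"from": "A", "to": "B", "amount": 1.0}]}
block_2 = {"height": 2, "prev": block_hash(block_1),
           "txs": [{"from": "B", "to": "C", "amount": 0.4}]}

# Tampering with an earlier block breaks every later link.
block_1["txs"][0]["amount"] = 100.0
print(block_2["prev"] == block_hash(block_1))   # False: the chain no longer verifies
```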


The privacy dangers of web3 and DeFi – and the projects trying to fix them

Less discussed is the impact of web3 and DeFi on user privacy. Proponents argue that web3 will improve user privacy by putting individuals in control of their data, via distributed personal data stores. But critics say that the transparent nature of public distributed ledgers, which make transactions visible to all participants, is antithetical to privacy. “Right now, web3 requires you to give up privacy entirely,” Tor Bair, co-founder of private blockchain The Secrecy Network, tweeted earlier this year. “NFTs and blockchains are all public-by-default and terrible for ownership and security.” Participants in public blockchains don’t typically need to make their identities known, but researchers have demonstrated how transactions recorded on a blockchain could be linked to individuals. A recent paper by researchers at browser maker Brave and Imperial College London found that many DeFi apps incorporate third-party web services that can access the users’ Ethereum addresses. “We find that several DeFi sites rely on third parties and occasionally even leak your Ethereum address to those third parties – mostly to API and analytics providers,” the researchers wrote.


The Importance of People in Enterprise Architecture

All the employees in the organization should have a shared understanding of the overarching future state and be empowered to update the future state for their part of the whole. The communication and democratization discussed in the AS-IS section is also necessary for TO-BE. People need regular, self-service access to a continuously evolving future-state architecture description. Each person should have access that provides views specific to that person, their role, and links to other people who will collaborate to promote that understanding and evolve the design. Progressive companies are moving away from the plan-build-run mentality and this is changing the role of the architecture review board (ARB) that is operated by a central EA team. These traditionally act as a bureaucratic toll-gate, performing their role after the design is finished to ensure all system qualities are accounted for and the design is aligned with the future state approach. However, democratizing the enterprise architecture role and sharing design autonomy now requires collaboration on the initial phase of design at the start of an increment. This collaboration is to ensure the reasoning behind the enterprise-wide future-state is understood, and the desired system qualities are carefully evaluated.


The Best Way to Manage Unstructured Data Efficiently

A lot of people seem to place a lot of focus on data analysis techniques and machine learning models when building a high-quality ML production pipeline. However, what a lot of people miss is that storage is one of the most important aspects of your pipeline. This is because the pipeline has 3 main components: collecting data, storing it, and consuming it. Effective storage methods do not only boost storage capabilities but also help in more efficient collection and consumption. The ease of searching with customizable metadata is available in object storage and helps in doing both of those. Not only do you want to choose the correct storage tech, but you also want to choose the correct provider. AWS comes to mind as one of the best object storage providers mainly because its infrastructure provides smooth service and ease of scaling. Furthermore, for effective consumption of data, there must be a software layer that runs on top of this storage for data aggregation and collection purposes. This is also an important choice and needs to be discussed in another article dedicated to the topic.
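For example, attaching custom metadata to objects takes only a few lines with boto3; the bucket name, key, and tags below are hypothetical, and valid AWS credentials are assumed.

```python
# Sketch: store an object in S3 with custom metadata that a software layer can
# later read back (via HEAD requests) without downloading the object itself.
import boto3

s3 = boto3.client("s3")
bucket = "example-unstructured-data"            # placeholder bucket name

s3.put_object(
    Bucket=bucket,
    Key="contracts/2021/acme-msa.pdf",
    Body=b"example document bytes",
    Metadata={"doc-type": "contract", "department": "legal", "pii": "true"},
)

meta = s3.head_object(Bucket=bucket, Key="contracts/2021/acme-msa.pdf")["Metadata"]
print(meta)
```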



Quote for the day:

"Effective team leaders adjust their style to provide what the group can't provide for itself." -- Kenneth Blanchard

Daily Tech Digest - November 25, 2019

Avoiding the pitfalls of operating a honeypot

Operators of honeypots sometimes desire to trick the hacker into downloading phone-home and other technologies for purposes of identifying the hacker and/or better tracking their movements. Understand that downloading programming and other technology onto someone’s systems or attempting to access their systems without their knowledge or consent almost certainly violates state and federal anti-hacking laws – even if done in the context of cyber security. Penalties for these activities can be substantial and harsh. Never engage in such activities without the involvement and direction of law enforcement. ... Except for interactions with law enforcement, uses of personally identifiable information should be strictly avoided. Only aggregated or de-identified information should be used, particularly in the context of any published reports or statistics regarding operation of the honeypot. ... The law regarding entrapment is complicated, but if someone creates a situation intended solely to snare a wrongdoer, there is the potential for an argument this constitutes entrapment. In such a case, law enforcement may decline to take action on information gained from the honeypot.


Exploit code published for dangerous Apache Solr remote code execution flaw

At the time it was reported, the Apache Solr team didn't see the issue as a big deal, and developers thought an attacker could only access (useless) Solr monitoring data, and nothing else. Things turned out to be much worse when, on October 30, a user published proof-of-concept code on GitHub showing how an attacker could abuse the very same issue for "remote code execution" (RCE) attacks. The proof-of-concept code used the exposed 8983 port to enable support for Apache Velocity templates on the Solr server and then used this second feature to upload and run malicious code. A second, more refined proof-of-concept code was published online two days later, making attacks even easier to execute. It was only after the publication of this code that the Solr team realized how dangerous this bug really was. On November 15, they issued an updated security advisory. In its updated alert, the Solr team recommended that Solr admins set the ENABLE_REMOTE_JMX_OPTS option in the solr.in.sh config file to "false" on every Solr node and then restart Solr.



Stateful Serverless: Long-Running Workflows with Durable Functions

There are a few reasons the workload doesn’t appear to be a good fit for Azure Functions at first glance. It runs relatively long (the example was just part of the game; an entire game may take hours or days). In addition, it requires state to keep track of the game in progress. Azure Functions by nature are stateless. They are designed to be quickly run self-contained transactions. Any concept of state must be managed using cache, storage, or database. If only the function could be suspended while waiting for asynchronous actions to complete and maintain its state when resumed. The Durable Task Framework is an open source library that was written to manage state and control flow for long-running workflows. Durable Functions build on the framework to provide the same support for serverless functions. In addition to facilitating potential cost savings for longer running workflows, it opens a new set of patterns and possibilities for serverless applications. To illustrate these patterns, I created the Durable Dungeon. This article is based on a presentation I first gave at NDC Oslo.
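To make the suspend-and-replay idea concrete without reproducing the Durable Task Framework's actual API, here is a conceptual toy in Python: the orchestrator is a generator that yields activity requests, and the runner reconstructs its position by replaying a recorded history each time a new result arrives.

```python
# Conceptual toy only (not the Durable Task Framework or Durable Functions API):
# the orchestrator yields activity calls; the runner replays recorded results on
# each resume, so workflow state is rebuilt without staying resident in memory.
def game_orchestrator():
    player = yield ("create_player", "Esdras")
    monster = yield ("spawn_monster", player)
    outcome = yield ("battle", player, monster)
    return f"{player} vs {monster}: {outcome}"

def run(orchestrator, history, new_result=None):
    """Replay completed activities from history, then either finish or suspend."""
    gen = orchestrator()
    request = gen.send(None)
    try:
        for recorded in history:
            request = gen.send(recorded)          # replay: feed past results back in
        if new_result is not None:
            history.append(new_result)
            request = gen.send(new_result)
        return ("waiting_on", request)
    except StopIteration as done:
        return ("completed", done.value)

history = []
print(run(game_orchestrator, history))                    # waiting on create_player
print(run(game_orchestrator, history, "Esdras"))          # waiting on spawn_monster
print(run(game_orchestrator, history, "dungeon troll"))   # waiting on battle
print(run(game_orchestrator, history, "victory"))         # completed
```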


The Edge of Test Automation: DevTestOps and DevSecOps

DevTestOps allows developers, testers, and operation engineers to work together in a similar environment. Apart from running test cases, DevTestOps also involves writing test scripts, automation, manual, and exploratory testing. In the past few years, DevOps and automation testing strategies have received a lot of appreciation because teams were able to develop and deliver products in the minimum time possible. But, many organizations soon realized that without continuous testing, DevOps provide an incomplete delivery of software that might be full of bugs and issues. And that’s why DevTestOps was introduced. Now, DevTestOps is growing in popularity because it improves the relationship between the team members involved in a software development process. It not only helps in faster delivery of products but also provides high-quality software. And when the software is released, automated test cases are already stored in it for future releases.


Q&A with Tyler Treat on Microservice Observability

A common misstep I see is companies chasing tooling in hopes that it will solve all of their problems. "If we get just one more tool, things will get better." Similarly, seeking a "single pane of glass" is usually a fool’s errand. In reality, what the tools do is provide different lenses through which to view things. The composite of these is what matters, and there isn’t a single tool that solves all problems. But while tools are valuable, they aren’t the end of the story. As with most things, it starts with culture. You have to promote a culture of observability. If teams aren’t treating instrumentation as a first-class concern in their systems, no amount of tooling will help. Worse yet, if teams aren’t actually on-call for the systems they ship to production, there is no incentive for them to instrument at all. This leads to another common mistake, which is organizations simply renaming an Operations team to an Observability team. This is akin to renaming your Ops engineers to DevOps engineers thinking it will flip some switch. 
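
As a small illustration of what treating instrumentation as a first-class concern can look like in practice (this is not from the interview; the library choice and metric names are just examples), a service can expose its own metrics directly from the code that does the work:

```python
import time
from prometheus_client import Counter, Histogram, start_http_server

# Metrics are defined next to the business logic, not bolted on later.
REQUESTS = Counter("orders_requests_total", "Requests handled by the orders service")
LATENCY = Histogram("orders_request_seconds", "Request latency in seconds")

def handle_order(order_id: str) -> None:
    REQUESTS.inc()
    with LATENCY.time():      # records how long this block takes
        time.sleep(0.05)      # stand-in for the real work

if __name__ == "__main__":
    start_http_server(8000)   # metrics become scrapeable at :8000/metrics
    for i in range(100):
        handle_order(f"order-{i}")
```

The point is not the particular tool: whichever backend consumes these numbers, the team shipping the service owns the instrumentation.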


8 ways to prepare your data center for AI’s power draw

Existing data centers might be able to handle AI computational workloads but in a reduced fashion, says Steve Conway, senior research vice president for Hyperion Research. Many, if not most, workloads can be operated at half or quarter precision rather than 64-bit double precision. “For some problems, half precision is fine,” Conway says. “Run it at lower resolution, with less data. Or with less science in it.” Double-precision floating point calculations are primarily needed in scientific research, which is often done at the molecular level. Double precision is not typically used in AI training or inference on deep learning models because it is not needed. Even Nvidia advocates for use of single- and half-precision calculations in deep neural networks. AI will be a part of your business but not all, and that should be reflected in your data center. “The new facilities that are being built are contemplating allocating some portion of their facilities to higher power usage,” says Doug Hollidge, a partner with Five 9s Digital, which builds and operates data centers. “You’re not going to put all of your facilities to higher density because there are other apps that have lower draw.”
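
As a rough, hypothetical illustration of why precision matters for the memory and power budget, compare the same weight matrix stored at double and half precision (sizes chosen arbitrarily):

```python
import numpy as np

# The same 1024x1024 weight matrix at two precisions.
weights_fp64 = np.ones((1024, 1024), dtype=np.float64)   # 64-bit double precision
weights_fp16 = weights_fp64.astype(np.float16)           # 16-bit half precision

print(weights_fp64.nbytes // 1024**2, "MiB at double precision")  # 8 MiB
print(weights_fp16.nbytes // 1024**2, "MiB at half precision")    # 2 MiB
```

A quarter of the bytes means less memory, less data movement, and correspondingly less power for the same model, which is the trade-off Conway describes.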


Kubernetes meets the real world

Kubernetes is enabling enterprises of all sizes to improve their developer velocity, nimbly deploy and scale applications, and modernize their technology stacks. For example, the online retailer Ocado, which has been delivering fresh groceries to UK households since 2000, has built its own technology platform to manage logistics and warehouses. In 2017, the company decided to start migrating its Docker containers to Kubernetes, taking its first application into production in the summer of 2017 on its own private cloud. The big benefits of this shift for Ocado and others have been much quicker time-to-market and more efficient use of computing resources. At the same time, Kubernetes adopters also tend to cite the same drawback: The learning curve is steep, and although the technology makes life easier for developers in the long run, it doesn’t make life less complex. Here are some examples of large global companies running Kubernetes in production, how they got there, and what they have learned along the way.


HP to Xerox: We don't need you, you're a mess


The HP Board of Directors has reviewed and considered your November 21 letter, which has provided no new information beyond your November 5 letter. We reiterate that we reject Xerox's proposal as it significantly undervalues HP. Additionally, it is highly conditional and uncertain. In particular, there continues to be uncertainty regarding Xerox's ability to raise the cash portion of the proposed consideration and concerns regarding the prudence of the resulting outsized debt burden on the value of the combined company's stock even if the financing were obtained. Consequently, your proposal does not constitute a basis for due diligence or negotiation. We believe it is important to emphasize that we are not dependent on a Xerox combination. We have great confidence in our strategy and the numerous opportunities available to HP to drive sustainable long-term value, including the deployment of our strong balance sheet for increased share repurchases of our significantly undervalued stock and for value-creating M&A.


A new era of cyber warfare: Russia’s Sandworm shows “we are all Ukraine” on the internet

This was “the kind of destructive act on the power grid we've never seen before, but we've always dreaded.” Even more concerning, “what happens in Ukraine we'll assume will happen to the rest of us too because Russia is using it as a test lab for cyberwar. That cyberwar will sooner or later spill out to the West,” Greenberg said. “When you make predictions like this, you don't really want them to come true.” Sandworm's attacks did spill out to the West in its next big operation, the NotPetya malware, which swept across continents in June 2017, causing untold damage in Europe and the United States, but mostly in Ukraine. NotPetya took down “300 Ukrainian companies and 22 banks, four hospitals that I'm aware of, multiple airports, pretty much every government agency. It was a kind of a carpet bombing of the Ukrainian internet, but it did immediately spread to the rest of the world fulfilling [my] prediction far more quickly than I would have ever wanted it to,” Greenberg said. The enormous financial costs of NotPetya are still unknown, but for companies that have put a price tag on the attack, the figures are staggering.


Lessons Learned in Performance Testing


To remind ourselves: throughput is the number of operations completed per unit of time (a typical example is operations per second). Latency, also known as response time, is the time from the start of an operation's execution to receiving the answer. These two basic metrics of system performance are usually connected to each other. In a non-parallel system, latency is actually the inverse of throughput and vice versa. This is very intuitive: if I do 10 operations per second, one operation takes (on average) 1/10 of a second. If I do more operations in one second, each single operation has to take less time. However, this intuition easily breaks down in a parallel system. As an example, consider adding another request-handling thread to a web server. You're not shortening the time of a single operation, so latency stays (at best) the same, yet you double the throughput. From this example, it's clear that throughput and latency are essentially two different metrics of a system, and we have to test them separately.
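
A tiny sketch of the arithmetic behind that example (an idealized, queue-free model with hypothetical numbers, not a real benchmark):

```python
def throughput(concurrency: int, latency_s: float) -> float:
    """Operations per second when `concurrency` requests are in flight
    and each one takes `latency_s` seconds."""
    return concurrency / latency_s

# Single-threaded server: 0.1 s latency -> 10 ops/s (inverse relationship).
print(throughput(concurrency=1, latency_s=0.1))   # 10.0

# Add a second request-handling thread: each operation still takes 0.1 s,
# so latency is unchanged, but throughput doubles.
print(throughput(concurrency=2, latency_s=0.1))   # 20.0
```

Because concurrency decouples the two numbers, a load test that only reports one of them tells at most half the story.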



Quote for the day:


"Becoming a leader is synonymous with becoming yourself. It is precisely that simple, and it is also that difficult." -- Warren G. Bennis