
Daily Tech Digest - September 17, 2025


Quote for the day:

“We are all failures - at least the best of us are.” -- J.M. Barrie


AI Governance Reaches an Inflection Point

AI adoption has made privacy, compliance, and risk management dramatically more complex. Unlike traditional software, AI models are probabilistic, adaptive, and capable of generating outcomes that are harder to predict or explain. As Blake Brannon, OneTrust’s chief innovation officer, summarized: “The speed of AI innovation has exposed a fundamental mismatch. While AI projects move at unprecedented speed, traditional governance processes are operating at yesterday’s pace.” ... These dynamics explain why, several years ago, Dresner Advisory Services shifted its research lens from data governance to data and analytics (D&A) governance. AI adoption makes clear that organizations must treat governance not as a siloed discipline, but as an integrated framework spanning data, analytics, and intelligent systems. D&A governance is broader in scope than traditional data governance. It encompasses policies, standards, decision rights, procedures, and technologies that govern both data and analytic content across the organization. ... The modernization is not just about oversight — it is about rethinking priorities. Survey respondents identify data quality and controlled access as the most critical enablers of AI success. Security, privacy, and the governance of data models follow closely behind. Collectively, these priorities reflect an emerging consensus: The real foundation of successful AI is not model architecture, but disciplined, transparent, and enforceable governance of data and analytics.


Shai-Hulud Supply Chain Attack: Worm Used to Steal Secrets, 180+ NPM Packages Hit

The packages were injected with a post-install script designed to fetch the TruffleHog secret scanning tool to identify and steal secrets, and to harvest environment variables and IMDS-exposed cloud keys. The script also validates the collected credentials and, if GitHub tokens are identified, it uses them to create a public repository and dump the secrets into it. Additionally, it pushes a GitHub Actions workflow that exfiltrates secrets from each repository to a hardcoded webhook, and migrates private repositories to public ones labeled ‘Shai-Hulud Migration’. ... What makes the attack different is malicious code that uses any identified NPM token to enumerate and update the packages that a compromised maintainer controls, to inject them with the malicious post-install script. “This attack is a self-propagating worm. When a compromised package encounters additional NPM tokens in a victim environment, it will automatically publish malicious versions of any packages it can access,” Wiz notes. ... The security firm warns that the self-spreading potential of the malicious code will likely keep the campaign alive for a few more days. To avoid being infected, users should be wary of any packages that have new versions on NPM but not on GitHub, and are advised to pin dependencies to avoid unexpected package updates.
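The dependency-pinning advice can be made concrete with a small audit pass. Below is a minimal Python sketch (the manifest contents and package names are invented for illustration) that flags any dependency whose version specifier is a range rather than an exact pin:

```python
import json

# Range specifiers like "^1.2.3" or "~1.2.3" let npm pull newer (potentially
# trojanized) releases automatically; exact pins do not.
RANGE_PREFIXES = ("^", "~", ">", "<", "*", "x")

def unpinned_dependencies(package_json: str) -> list[str]:
    """Return names of dependencies whose version specifier is not an exact pin."""
    manifest = json.loads(package_json)
    flagged = []
    for section in ("dependencies", "devDependencies"):
        for name, spec in manifest.get(section, {}).items():
            if spec.strip().startswith(RANGE_PREFIXES) or spec == "latest":
                flagged.append(name)
    return flagged

# Hypothetical manifest for illustration only
manifest = '''{
  "dependencies": {"left-pad": "^1.3.0", "lodash": "4.17.21"},
  "devDependencies": {"eslint": "~8.0.0"}
}'''
print(unpinned_dependencies(manifest))
```

Pinning alone does not stop a compromised exact version, but it prevents a worm-published update from being pulled in silently on the next install.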


Scattered Spider Tied to Fresh Attacks on Financial Services

The financial services sector appears to remain at high risk of attack by the group. Over the past two months, elements of Scattered Spider registered "a coordinated set of ticket-themed phishing domains and Salesforce credential harvesting pages" designed to target the financial services sector as well as providers of technology services, suggesting a continuing focus on those sectors, ReliaQuest said. Registering lookalike domain names is a repeat tactic used by many attackers, from Chinese nation-state groups to Scattered Spider. Such URLs are designed to trick victims into thinking a link that they visit is legitimate. ... Members of Scattered Spider and ShinyHunters excel at social engineering, including voice phishing, aka vishing. This often involves tricking a help desk into believing the attacker is a legitimate employee, leading to passwords being reset and single sign-on tokens intercepted. In some cases, experts say, the attackers trick a victim into visiting lookalike support panels they've created which are part of a phishing attack. Since the middle of the year, members of Scattered Spider have breached British retailers Marks & Spencer, followed by American retailers such as Adidas and Victoria's Secret. The group has been targeting American insurers such as Aflac and Allianz Life, global airlines including Air France, KLM and Qantas, and technology giants Cisco and Google.


Tech’s Tarnished Image Spurring Rise of Chief Trust Officers

In today’s highly competitive world, organizations need every advantage they can get, which can include trust. “Part of selecting vendors, whether it is an official part of the process or not, is evaluating the trust you have in that vendor,” explained Erich Kron ... “By signifying someone in a high level of leadership as the person responsible and accountable for culminating and maintaining that level of trust, the organization may gain significant competitive advantages through loyalty and through competitive means,” he told TechNewsWorld. “The chief trust officer role is a visible, external and internal sign of an organization’s commitment to trust,” added Jim Alkove. ... “It’s an explicit statement of intent to your employees, to your customers, to your partners, to governments that your company cares so much about trust and that you’ve announced that there’s a leader responsible for it,” Alkove, a former CTrO at Salesforce, told TechNewsWorld. ... Forrester noted that trust has become a revenue problem for B2B software companies, and CTrOs provide a means to resolve issues that could stall deals and impact revenue. “When procurement and third-party risk management teams identified issues with a business partner’s cybersecurity posture, contracts stalled,” the report explained. “These issues reflected on the competence, consistency, and dependability of the potential partner. Chief trust officers and their teams step in to remove those obstacles and move deals along.”


AI ROI Isn't About Cost Savings Anymore

The traditional metrics of ROI, including cost savings, headcount reduction and revenue uplift, are no longer sufficient. Let's start with the obvious challenge: ROI today is often measured vertically, at the use-case or project level, tracking model accuracy or incremental sales. Although necessary, this vertical lens misses the broader picture. What's needed is a horizontal perspective on ROI - metrics that capture how investments in cloud infrastructure, data engineering and cross-silo integration accelerate every subsequent AI initiative. ... When data is cleaned and standardized for one use case, the next model development becomes faster and more reliable. Yet these productivity gains rarely appear in ROI calculations. The same applies to interoperability across functions. For example, predictive models developed for finance may inform HR or marketing strategies, multiplying AI's value in ways traditional KPIs overlook. ... Emerging models, such as Gartner's multidimensional AI measurement frameworks, and India's evolving AI governance standards offer early guidance. But turning them into practice requires rigor - from assessing how data improvements accelerate downstream use cases to quantifying cross-team synergies, and even recognizing softer outcomes like trust and employee well-being. "AI is neither hype nor savior - it is a tool," Gupta said.


How a fake ICS network can reveal real cyberattacks

Most ICS honeypots today are low interaction, using software to simulate devices like programmable logic controllers (PLCs). These setups are useful for detecting basic threats but are easy for skilled attackers to identify. Once attackers realize they are interacting with a decoy, they stop revealing their tactics. ... ICSLure takes a different approach. It combines actual PLC hardware with realistic simulations of physical processes, such as the movement of machinery on a factory floor. This creates what the researchers call a very high interaction environment. For attackers, ICSLure feels like a live industrial network. For defenders, it provides more accurate data about how adversaries move inside an ICS environment and the techniques they use to disrupt operations. Angelo Furfaro, one of the researchers behind ICSLure, told Help Net Security that deploying this type of environment safely requires careful planning. “The honeypot infrastructure must be completely segregated from any production network through dedicated VLANs, firewalls, and demilitarized zones, ensuring that malicious activity cannot spill over into critical operations,” he said. “PLCs should only interact with simulated plants or digital twins, eliminating the possibility of executing harmful commands on physical processes.”


The Biggest Barriers Blocking Agentic AI Adoption

To achieve the critical mass needed to fuel mainstream adoption of AI agents, we have to be able to trust them. This is true on several levels: we have to trust them with the sensitive and personal data they need to make decisions on our behalf, and we have to trust that the technology works and that our efforts aren't hampered by specific AI flaws like hallucinations. And if we are trusting it to make serious decisions, such as buying decisions, we have to trust that it will make the right ones and not waste our money. ... Another problem is that agentic AI relies on the ability of agents to interact and operate with third-party systems, and many third-party systems aren’t set up to work with this yet. Computer-using agents (such as OpenAI Operator and Manus AI) circumvent this by using computer vision to understand what’s on a screen. This means they can use many websites and apps just like we can, whether or not they’re programmed to work with them. ... Finally, there are wider cultural concerns that go beyond technology. Some people are uncomfortable with the idea of letting AI make decisions for them, regardless of how routine or mundane those decisions may be. Others are nervous about the impact that AI will have on jobs, society or the planet. These are all totally valid and understandable concerns and can’t be dismissed as barriers to be overcome simply through top-down education and messaging.


The Legal Perils of Dark Patterns in India: Intersection between Data Privacy and Consumer Protection

A dark pattern is any deceptive design pattern in a UI or UX that misleads or tricks users by subverting their autonomy and manipulating them into taking actions they would not otherwise have taken. The term was coined by UX designer Harry Brignull, who registered the website darkpatterns.org, intended as a public-interest library showcasing all types of such UX/UI designs; hence the name “dark pattern” came into being. ... Under Section 20 of the CP Act, the CCPA can order the recall of goods, or the withdrawal or discontinuation of services, if it finds that an entity is engaging in dark patterns in breach of the guidelines. ... By their very design, some patterns harm the user in two ways: first, by manipulating them into choices they would not have otherwise made; and second, by compelling the collection or processing of personal data in ways that breach data protection requirements. In such cases, the entity is not only exploiting the individual but is also failing to meet its legal duties under the DPDPA, thereby creating exposure under both the CP Act and the DPDPA. ... Under the DPDPA, the stakes are now significantly higher. The Data Protection Board of India has the authority to impose financial penalties of up to Rs 50 crores for not obtaining purposeful consent or for disregarding technical and organisational measures.


In Order to Scale AI with Confidence, Enterprise CTOs Must Unlock the Value of Unstructured Data

Over the past two years, we’ve witnessed rapid advancements in Large Language Models (LLMs). As these models become increasingly powerful–and more commoditized–the true competitive edge for enterprises will lie in how effectively they harness their internal data. Unstructured content forms the foundation of modern AI systems, making it essential for organizations to build strong unstructured data infrastructure to succeed in the AI-driven era. This is what we mean by an unstructured data foundation: the ability for companies to rapidly identify what unstructured data exists across the organization, assess its quality, sensitivity, and safety, enrich and contextualize it to improve AI performance, and ultimately create a governed system for generating and maintaining high-quality data products at scale. In 2025, unstructured data is as much about quality as it is about quantity. “Quality” in the context of unstructured data remains largely uncharted territory. Companies need clear frameworks to assess dimensions like relevance, freshness, and duplication. Over the past six years, the volume and variety of unstructured data–and the number of AI applications that generate or depend on it–have exploded. Many have called it the largest and most valuable source of data within an organization, and I’d agree–especially as AI becomes increasingly central to how enterprises operate. Here’s why.
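One way to make "freshness" and "duplication" measurable is a simple scoring pass over the corpus. A hedged Python sketch follows; the document fields and the 365-day freshness window are assumptions for illustration, not an established framework:

```python
import hashlib
from datetime import datetime, timezone, timedelta

def dedupe_and_score(docs, max_age_days=365):
    """Keep one copy of each distinct document and score freshness from 0 to 1."""
    seen, results = set(), []
    now = datetime.now(timezone.utc)
    for doc in docs:
        digest = hashlib.sha256(doc["text"].encode()).hexdigest()
        if digest in seen:
            continue  # exact duplicate: keep only the first copy
        seen.add(digest)
        age_days = (now - doc["modified"]).days
        freshness = max(0.0, 1.0 - age_days / max_age_days)
        results.append({"id": doc["id"], "freshness": round(freshness, 2)})
    return results

# Hypothetical documents for illustration
docs = [
    {"id": "a", "text": "Q3 pricing policy", "modified": datetime.now(timezone.utc)},
    {"id": "b", "text": "Q3 pricing policy",  # duplicate text under a different id
     "modified": datetime.now(timezone.utc) - timedelta(days=400)},
]
print(dedupe_and_score(docs))
```

Real pipelines would add near-duplicate detection and relevance scoring, but even exact-hash dedupe plus an age decay gives teams a first quantitative handle on corpus quality.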


Scaling Databases for Large Multi-Tenant Applications

Building and maintaining multi-tenant database applications is one of the more challenging aspects of being a developer, administrator or analyst. Until the debut of AI systems, with their power-hungry GPUs, database workloads represented the most expensive workloads because of their demands on memory, CPU and storage performance to work effectively. ... Sharding is a data management technique that effectively partitions data across multiple databases. At its center, you need something that I like to call a command and control database, though I've also seen it called a shard-map manager or a router database. This database contains the metadata about the shards and your environment, and routes application calls to the appropriate shard or database. ... If you are working on the Microsoft stack, I'm going to give a shout out to elastic database tools. This .NET library gives you all the tools, like shard-map management, data-dependent routing, and multi-shard queries as needed. Additionally, consider the ability to add and remove shards to match shifting demands. ... Some other tooling you need to think about in planning is how to execute schema changes across your partitions. Database DevOps is a mature practice, but rolling out changes across a fleet of databases requires careful forethought and operations.
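The command-and-control / shard-map idea can be sketched in a few lines. The connection strings and hash-bucketing scheme below are illustrative; a real shard map persists range or hash mappings in a dedicated database and handles rebalancing when shards are added:

```python
class ShardMap:
    """Minimal shard-map manager: routes a tenant key to a shard (illustrative)."""

    def __init__(self, shards):
        self.shards = shards  # list of shard connection strings

    def route(self, tenant_id: int) -> str:
        # Data-dependent routing: the tenant key alone decides the target shard.
        return self.shards[hash(tenant_id) % len(self.shards)]

    def add_shard(self, conn: str):
        # Elasticity hook: real systems must also rebalance existing tenants.
        self.shards.append(conn)

shard_map = ShardMap(["db-shard-0", "db-shard-1"])
print(shard_map.route(42))
```

Every application call first consults the shard map, then opens a connection to the shard it returns, which is exactly the role the "router database" plays in the pattern described above.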

Daily Tech Digest - September 23, 2023

A CISO’s First 90 Days: The Ultimate Action Plan and Advice

It’s a CISO’s responsibility to establish a solid security foundation as rapidly as possible, and there are many mistakes that can be made along the way. This is why the first 90 days are the most important for new CISOs. Without a clear pathway to success in the early months, CISOs can lose confidence in their ability as change agents and put their entire organization at risk of data theft and financial loss. No pressure! Here’s our recommended roadmap for CISOs in the first 90 days of a new role. ... This means they can reduce the feeling of overwhelm and work strategically toward business goals. For a new CISO, it can be challenging trying to locate and classify all the sensitive data across an organization, not to mention ensuring that it’s also safe from a variety of threats. Data protection technology is often focused on perimeters and endpoints, giving internal bad actors the perfect opportunity to slip through any security gaps in files, folders, and devices. For large organizations, it’s practically impossible to audit data activity at scale without a robust data security posture management (DSPM) solution.


There’s No Value in Observability Bloat. Let’s Focus on the Essentials

Telemetry data gathered from the distributed components of modern cloud architectures needs to be centralized and correlated for engineers to gain a complete picture of their environments. Engineers need a solution with critical capabilities such as dashboarding, querying and alerting, and AI-based analysis and response, and they need the operation and management of the solution to be streamlined. What’s important for them to know is that it’s not necessary to spend more to ensure peak performance and visibility as their environmental complexity grows. ... No doubt, more data is being generated, but most of it is not relevant or valuable to an organization. Observability can be optimized to bring greater value to customers, and that’s where the market is headed. Call it “essential observability.” It’s a disruptive vision to propose a re-architected approach to observability, but what engineers need is a new approach making it easier to surface insights from their telemetry data while deprioritizing low-value data. Costs can be reduced by consuming only the data that enables teams to maintain performance and drive smart business decisions.
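Deprioritizing low-value telemetry often looks like a filter at the collection edge. The toy Python sketch below illustrates the idea; the level names and keep-set are assumptions, and production pipelines typically sample or aggregate low-value events rather than dropping them outright:

```python
# Severity levels considered worth ingesting at full fidelity (illustrative)
KEEP_LEVELS = {"ERROR", "WARN"}

def essential_only(events):
    """Filter telemetry at the edge: forward only events worth paying to store."""
    return [e for e in events if e["level"] in KEEP_LEVELS]

stream = [
    {"level": "DEBUG", "msg": "cache hit"},
    {"level": "ERROR", "msg": "payment timeout"},
    {"level": "INFO", "msg": "heartbeat"},
]
print(essential_only(stream))
```

Even a crude rule like this shifts cost from "store everything, query later" toward ingesting only the data that supports alerting and decision-making.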


Shedding Light on Dark Patterns in FinTech: Impact of DPDP Act

In practice, these patterns exploit human psychology and trick people into making unwanted choices or purchases. It has become a menace for the FinTech industry. These patterns are used to encourage people to sign up for loans, credit cards, and other financial products that they may not need or understand. However, the new Digital Personal Data Protection Act, 2023 (“DPDP Act”), can be used to bring such dark patterns under control. The DPDP Act requires online platforms to seek consent of Data Principals through clear, specific and unambiguous notice before processing any data. Further, the Act empowers individuals to retract or withdraw consent to any agreement at any juncture. ... Companies will need to review their user interfaces and remove any dark patterns they are using, protect personal data, use the data for ‘legitimate purposes’ only, and take consent from users, through clear affirmative action, in unambiguous terms. They will also need to develop new ways to promote their products and services without relying on deception.


Can business trust ChatGPT?

It might seem premature to worry about trust when there is already so much interest in the opportunities Gen AI can offer. However, it needs to be recognized that there’s also an opportunity cost — inaccuracy and misuse could be disastrous in ways organizations can’t easily anticipate. Up until now, digital technology has been traditionally viewed as being trustworthy in the sense that it is seen as being deterministic. Like an Excel formula, it will be executed in the same manner 100% of the time, leading to a predictable, consistent outcome. Even when the outcome yields an error — due to implementation issues, changes in the context in which it has been deployed, or even bugs and faults — there is nevertheless a sense that technology should work in a certain way. In the case of Gen AI, however, things are different; even the most optimistic hype acknowledges that it can be unpredictable, and its output is often unexpected. Trust in consistency seems to be less important than excitement at the sheer range of possibilities Gen AI can deliver, seemingly in an instant.


A Few Best Practices for Design and Implementation of Microservices

The first step is to define the microservices architecture. It has to be established how the services will interact with each other before a company attempts to optimise their implementation. Once the microservices architecture gets going, teams must be able to capitalise on the resulting increase in speed. It is better to start with a few coarse-grained but self-contained services. Fine graining can happen as the implementation matures over time. The developers, operations team, and testing fraternity may have extensive experience in monoliths, but a microservices-based system is a new reality; hence, they need time to cope with this new shift. Do not discard the monolithic application immediately. Instead, have it co-exist with the new microservices, and iteratively deprecate similar functionalities in the monolithic application. This is not easy and requires a significant investment in people and processes to get started. As with any technology, it is always better to avoid the big bang approach, and identify ways to get the toes wet before diving in head first.
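The co-existence approach described above (often called the strangler fig pattern) can be sketched as a routing facade that sends migrated capabilities to new services and everything else to the monolith. The hostnames and capability names below are hypothetical:

```python
# Capabilities already carved out of the monolith (illustrative)
MIGRATED = {"billing", "search"}

def route(capability: str) -> str:
    """Strangler-style facade: prefer the new microservice once it exists."""
    if capability in MIGRATED:
        return f"https://{capability}.internal/api"   # new microservice
    return f"https://monolith.internal/{capability}"  # legacy path, deprecated over time

print(route("billing"))
print(route("reports"))
```

As each function is migrated, its name moves into the migrated set and the corresponding monolith endpoint can be retired, which is the iterative deprecation the text recommends.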


Bridging Silos and Overcoming Collaboration Antipatterns in Multidisciplinary Organisations

Collaboration is at the heart of teamwork. Many modern organisations set up teams to be cross-functional or multidisciplinary. Multidisciplinary teams are made up of specialists from different disciplines collaborating daily towards a shared outcome. They have the roles needed to design, plan, deliver, deploy and iterate a product or service. Modern approaches and frameworks often focus on increasing flow and reducing blockers, and one way to do this is to remove the barrier between functions. However, as organisations grow in size and complexity, they look for different ways of working together, and some of these create collaboration antipatterns. Three of the most common antipatterns I see and have named here are: one person split across multiple teams; product vs. engineering wars; and X-led organisations.


The Rise of the Malicious App

Threat actors have changed the playing field with the introduction of malicious apps. These applications add nothing of value to the hub app. They are designed to connect to a SaaS application and perform unauthorized activities with the data contained within. When these apps connect to the core SaaS stack, they request certain scopes and permissions. These permissions then allow the app the ability to read, update, create, and delete content. Malicious applications may be new to the SaaS world, but it's something we've already seen in mobile. Threat actors would create a simple flashlight app, for example, that could be downloaded through the app store. Once downloaded, these minimalistic apps would ask for absurd permission sets and then data-mine the phone. ... Threat actors are using sophisticated phishing attacks to connect malicious applications to core SaaS applications. In some instances, employees are led to a legitimate-looking site, where they have the opportunity to connect an app to their SaaS. In other instances, a typo or slightly misspelled brand name could land an employee on a malicious application's site. 
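Reviewing the scopes a connected app has been granted can be partially automated. The minimal Python sketch below flags apps holding any high-risk permission; the scope names are hypothetical, since each SaaS platform defines its own:

```python
# Hypothetical high-risk scope names for illustration
HIGH_RISK = {"files:delete", "admin:write", "mail:read_all"}

def risky_apps(app_grants):
    """Return names of connected apps whose scopes include a high-risk permission."""
    return sorted(
        name for name, scopes in app_grants.items()
        if HIGH_RISK & set(scopes)
    )

grants = {
    "calendar-sync": ["calendar:read"],
    "free-pdf-tool": ["files:read", "files:delete", "admin:write"],
}
print(risky_apps(grants))
```

A simple inventory like this makes the "flashlight app asking for absurd permissions" problem visible in a SaaS estate: an app whose requested scopes far exceed its stated purpose is the first thing to review.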


What Is GreenOps? Putting a Sustainable Focus on FinOps

If the future of cloud sustainability appears bleak, Arora advises looking to examples of other tech advancements and the curve of their development, where early adopters led the way and then the main curve eventually followed. “The same thing happened with electric cars,” Arora points out. “They didn’t enter the mainstream because they were better for the environment; they entered the mainstream because the cost came down.” And this is what he predicts will happen with cloud sustainability. Right now, the early adopters are stepping forward and championing GreenOps as a part of the FinOps equation. In a few years, others will be able to measure their data, analyze how they reduced their carbon impact and what effect it had on cloud spending and savings, and then follow their lead. It’s naive to think that most companies will go out of their way (and perhaps even increase their cloud spending) to reduce their carbon footprint. 


The Growing Importance of AI Governance

As AI systems become more powerful and complex, businesses and regulatory agencies face two formidable obstacles: the complexity of the systems requires rule-making by technologists rather than politicians, bureaucrats, and judges; and the thorniest issues in AI governance involve value-based decisions rather than purely technical ones. An approach based on regulatory markets has been proposed that attempts to bridge the divide between government regulators who lack the required technical acumen and technologists in the private sector whose actions may be undemocratic. The technique adopts an outcome-based approach to regulation in place of the traditional reliance on prescriptive command-and-control rules. AI governance under this model would rely on licensed private regulators charged with ensuring AI systems comply with outcomes specified by governments, such as preventing fraudulent transactions and blocking illegal content. The private regulators would also be responsible for the safe use of autonomous vehicles, use of unbiased hiring practices, and identification of organizations that fail to comply with the outcome-based regulations.


Legal Issues for Data Professionals

Lawyers identify risks data professionals may not know they have. Moreover, because data is a new field of law, lawyers need to be innovative in creating legal structures in contracts to allow two or more parties to achieve their goals. For example, there are significant challenges attempting to apply the legal techniques traditionally used with other classes of business assets (such as intellectual property, real property, and corporate physical assets) to data as a business asset class. Because the old legal techniques do not fit well, lawyers and their clients need to develop new ways of handling the business and legal issues that arise, and in so doing, invent new legal structures that meet the specific attributes of data that differentiate data from other business assets. To take one example, using software agreements as a template for data transactions will not always work because the IP rights for software do not align with data, the concept of software deliverables and acceptance testing is not a good fit, and the representations and warranties are both over and underinclusive. 



Quote for the day:

"Rarely have I seen a situation where doing less than the other guy is a good strategy." -- Jimmy Spithill

Daily Tech Digest - January 11, 2023

WSL stands for writing as a second language. ... Whatever the intention, WSL leads to an overall tone that adds distance between the writer and the reader. And that is precisely the opposite of what is needed now from leaders. If there are fewer opportunities to hear leaders speak in person because so many of us are working from home, then we need to “hear” them speak in their emails. A more conversational writing tone shortens the distance between author and audience. It feels more real, which is what everyone craves at a time when we are living more of our lives online. To guard against WSL, just apply this simple test when reviewing what you’ve written: Does this sound like me? Would I talk like this if I were speaking face-to-face with a colleague? Reading aloud is a good way to check for the WSL problem (especially if, as a leader, someone else is writing the words for you). ... “Expert-itis” happens when people get too close to their subject. They assume everyone else knows as much as they do, so they focus on the nuances of a particular topic or insight without explaining the context.


Attackers Are Already Exploiting ChatGPT to Write Malicious Code

Sergey Shykevich reiterates that with ChatGPT, a malicious actor needs to have no coding experience to write malware: "You should just know what functionality the malware — or any program — should have. ChatGPT will write the code for you that will execute the required functionality." Thus, "the short-term concern is definitely about ChatGPT allowing low-skilled cybercriminals to develop malware," Shykevich says. "In the longer term, I assume that also more sophisticated cybercriminals will adopt ChatGPT to improve the efficiency of their activity, or to address different gaps they may have." From an attacker’s perspective, code-generating AI systems allow malicious actors to easily bridge any skills gap they might have by serving as a sort of translator between languages, added Brad Hong, customer success manager at Horizon3ai. Such tools provide an on-demand means of creating templates of code relevant to an attacker's objectives and cut down on the need for them to search through developer sites such as Stack Overflow and Git, Hong said in an emailed statement to Dark Reading.


Cybersecurity staff are struggling. Here's how to support them better

Cybersecurity professionals are at breaking point, with many fearing they will soon lose their jobs because of a cyberattack and others struggling to cope with the growing strain. Unless businesses act soon, an ever-growing skills gap might become an unbridgeable chasm. ... "Cyber used to be very much off in a darkened room," she says. "And don't get me wrong, there's loads of stuff relating to IT security that people in security still have to do. But you need to be thinking about cyber at the heart of every business process and everything that you do within an organization." And cyber isn't a one-way street -- as well as ensuring the people in security feel part of the broader enterprise, Heneghan says line-of-business professionals must also learn about cyber concerns themselves. Success requires a joined-up approach, where business and security come together and recognize how information integrity isn't just one team's -- or even one person's -- responsibility. "It's about building the fundamental foundation," she says. "It's not acceptable for anyone in an organization not to understand the exposure and the risks around security anymore."


FTC Is Escalating Scrutiny of Dark Patterns, Children’s Privacy

The FTC has publicly identified dark patterns as an enforcement priority. In September 2022, the FTC released a report summarizing concerns that companies are increasingly using sophisticated design practices, known as dark patterns, to trick or manipulate consumers into buying products or services or provide their personal data. The report reflects the FTC’s findings that dark patterns are used in a variety of industries and contexts, including e-commerce, cookie consent banners, children’s apps, and subscription sales. Unlike neutral interfaces, dark patterns often take advantage of consumers’ cognitive biases to steer their conduct or delay access to information needed to make fully informed decisions. The FTC’s research noted that dark patterns are highly effective at influencing consumer behavior. Dark patterns include disguising ads to look like independent content, making it difficult for consumers to cancel subscriptions or charges, burying key terms or junk fees, and tricking consumers into sharing their data. Because dark patterns are covert or otherwise deceptive, many consumers don’t realize they are being manipulated or misled.


8 top priorities for CIOs in 2023

Over the past decade, enterprises have rapidly added powerful technology and cloud-based services to their portfolios. At the same time, they have been much less likely to retire the legacy systems these new tools were meant to replace, creating a complex web of redundant applications and systems, warns VMware CIO Jason Conyard. There’s an industry-wide push to reduce technical and data debt and reallocate those resources toward building the future, Conyard says. “CIOs will be looking to rationalize their technology estate to reduce unnecessary cost and maintenance, and to minimize their security attack surface and privacy exposure.” ... There must be open, transparent, and collaborative working sessions to create alignment on how technology capabilities can be deployed to meet enterprise goals, states Bill Cassidy, CIO at New York Life Insurance. “All participants need to demonstrate strong communication skills, including effective listening, to properly weigh the pros, cons, and tradeoffs of one path of execution versus another,” he adds. ... Organizations that can successfully act on their data insights will thrive, says Dan Krantz, CIO of electronics test and measurement equipment manufacturer Keysight Technologies. 


Learning From Other People’s Mistakes

One prerequisite to this consolidation of wisdom is information sharing. Information about what works and what does not is needed to enact controls that help prevent the same events from happening twice. This can be accomplished in several ways. Using organizations such as ISACA® to stay connected to peers at other enterprises helps professionals converse about relevant topics. But information sharing goes beyond merely discussing what you are working on and how you are solving control problems. There is also a need to discuss what went wrong. This means sharing information about what failed and why. This is hard for several reasons, not the least of which is that it is embarrassing to admit to failure. However, there can also be legal impacts of admitting that something went wrong and that, as a result, services, people’s data, or even their lives were endangered. ... In short, not all cyber incidents can be attributed to sophisticated nation-state hackers leveraging advanced persistent threats (APTs), phrases such as “we are taking it seriously” notwithstanding.


Developer experience will take center stage in 2023

To win and retain top developer talent, software companies must provide a great developer experience. To do that, tech leaders must prioritize minimizing toil and frustration in the software development process. Software development is a highly creative process, but it is often riddled with bottlenecks and inefficiencies that disrupt creative flow. By minimizing bottlenecks like idle time waiting for build and test feedback cycles to complete and inefficient troubleshooting, software development teams will improve productivity while increasing developer happiness. Especially given the uncertain economic outlook, now is the time for companies to focus on solidifying their software development teams and upgrading their talent pools. As a result, there will be a greater emphasis on tools that boost productivity so developers can spend more time innovating and creating useful code. This is the best way to attract and retain top talent. When you ask software development leaders what their average feedback cycle time is, they usually don’t have an answer.
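The feedback cycle time the paragraph mentions is easy to start measuring. Below is a minimal, hypothetical sketch in Python that averages the time from code push to CI completion over a set of build records; the record format and timestamps are invented for illustration, not taken from any particular CI system.

```python
from datetime import datetime
from statistics import mean

def average_feedback_cycle(builds):
    """Average time from code push to CI result, in minutes."""
    durations = [
        (b["finished"] - b["pushed"]).total_seconds() / 60
        for b in builds
    ]
    return mean(durations)

# Hypothetical build records: push timestamp -> CI completion timestamp
builds = [
    {"pushed": datetime(2023, 1, 9, 10, 0), "finished": datetime(2023, 1, 9, 10, 18)},
    {"pushed": datetime(2023, 1, 9, 11, 5), "finished": datetime(2023, 1, 9, 11, 31)},
    {"pushed": datetime(2023, 1, 9, 13, 40), "finished": datetime(2023, 1, 9, 13, 52)},
]
print(f"Average feedback cycle: {average_feedback_cycle(builds):.1f} minutes")
```

Feeding such a metric from real CI webhooks or API exports is straightforward, and tracking it over time is what gives leaders the answer the article says they lack.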


What Are the Advantages of Quantum Computing?

At their core, quantum computers manipulate subatomic particles, making them ideal for atomic- and molecular-scale research and development. “It can help us solve physics problems where quantum machines and the interrelation of materials or properties are important,” Mark Potter, SVP and CTO of Hewlett Packard Enterprise and director of Hewlett Packard Labs, explained in an interview with ITPro in late 2019. “At an atomic level, quantum computing simulates nature and therefore could help us find new materials or identify new chemical compounds for drug discovery.” Quantum technology is also having an outsized impact on logistics management and route planning. For example, grocery chain Save-On-Foods is using quantum computing to optimize its logistics to become more efficient, save money, and bring fresh food to its customers. Specifically, it was able to reduce the computation time of an optimization task from 25 hours to only 2 minutes. Another major area of interest is quantum cryptography, which, depending on whom you ask, is either a major advantage or a cause for concern. 
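Route optimization of the kind Save-On-Foods tackled is combinatorial: the number of candidate routes grows factorially with the number of stops, which is why classical exhaustive search quickly becomes intractable and why quantum annealers are attractive for such problems. A toy classical brute-force sketch, with an invented distance matrix, makes the structure of the problem concrete:

```python
from itertools import permutations

# Hypothetical distance matrix between four delivery stops (km)
dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]

def best_route(dist):
    """Exhaustively search all round trips starting and ending at stop 0."""
    n = len(dist)
    best_cost, best_order = float("inf"), None
    for order in permutations(range(1, n)):
        route = (0, *order, 0)
        cost = sum(dist[a][b] for a, b in zip(route, route[1:]))
        if cost < best_cost:
            best_cost, best_order = cost, route
    return best_order, best_cost

route, cost = best_route(dist)
print(route, cost)  # optimal cost is 80 for this matrix
```

With only 3 free stops there are 6 permutations to check; at 20 stops there are roughly 1.2 × 10^17, which is the scale at which annealing-based optimizers such as the one Save-On-Foods used become compelling.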



CISOs Mark Data Proliferation as Growing Security Problem

Claude Mandy, chief evangelist of data security at Symmetry Systems, says data sprawl is a headache for security teams because they have historically designed their security to protect the systems and networks that data is stored on or transmitted through, but not the data itself. “As data proliferates outside of these secured environments, they have realized their security is no longer adequate,” he says. “This is particularly concerning when the traditional perimeter that provided some comfort has all but disappeared as organizations have moved to the cloud.” ... In the new era of data security, CISOs must be able to learn where sensitive data resides anywhere in the cloud environment, who can access it, and what its security posture is, and then deploy solutions accordingly. “Traditionally, data security has been the ultimate goal of infosec organizations,” says Ravi Ithal, Normalyze CTO and cofounder. “As the volume of data increases and the number of places where data exists increases -- data proliferation -- the number of ways in which it can be accessed and misused also increases.” 
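Learning where sensitive data resides usually starts with classification. The sketch below is a deliberately simplified, hypothetical illustration of pattern-based classification in Python; the pattern set is invented, and production data security posture platforms use far richer detection than a few regular expressions.

```python
import re

# Hypothetical detection patterns for common sensitive-data types
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def classify(text):
    """Return the sorted list of sensitive-data categories found in a text blob."""
    return sorted(name for name, pat in PATTERNS.items() if pat.search(text))

sample = "Contact jane.doe@example.com, SSN 123-45-6789."
print(classify(sample))  # -> ['email', 'ssn']
```

Running such classification continuously across cloud storage, and joining the results with access-control metadata, is what lets a CISO answer the "where is it and who can touch it" questions the article raises.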


4 key shifts in the breach and attack simulation (BAS) market

First, BAS solutions require up-front configuration for their on-site deployments, which may also require customizations to ensure everything works properly with the integrations. Additionally, BAS solutions need to be proactively maintained, and for enterprise environments this often requires dedicated staff. As a result, we’ll see BAS vendors work harder to streamline their product deployments to help reduce the overhead cost for their customers, for example by providing more SaaS-based offerings. Many BAS tools are designed to conduct automated security control validation. Most have an extensive library of automation modules that can simulate specific threats and malicious behaviors on endpoints, networks, or cloud platforms. BAS vendors tend to compete in the market this way. However, many vendors don’t offer the ability to create or customize modules in a meaningful way. For example, some don’t provide the user with a way to chain attack procedures together, which can be essential when trying to simulate an emerging threat that uses common tactics, techniques, and procedures (TTPs).
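The chaining capability described above can be pictured as a pipeline of simulated steps, each tagged with a MITRE ATT&CK technique, that halts when a security control blocks a step. This is a hypothetical Python sketch of the concept, not any vendor's API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Procedure:
    """One simulated attack step, tagged with its ATT&CK technique ID."""
    technique: str
    action: Callable[[dict], bool]  # True means the simulated step "succeeded"

def run_chain(procedures, context):
    """Execute procedures in order, stopping when a control blocks one."""
    results = []
    for proc in procedures:
        blocked = not proc.action(context)
        results.append((proc.technique, "blocked" if blocked else "succeeded"))
        if blocked:
            break
    return results

# Hypothetical chain: initial access -> persistence -> exfiltration
chain = [
    Procedure("T1566 Phishing", lambda ctx: True),
    Procedure("T1053 Scheduled Task", lambda ctx: True),
    Procedure("T1041 Exfiltration Over C2", lambda ctx: not ctx["egress_filtered"]),
]
print(run_chain(chain, {"egress_filtered": True}))
```

The value of chaining is visible in the result: the simulated intrusion progresses until the egress-filtering control blocks exfiltration, which tells the defender exactly which layer of the stack held.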



Quote for the day:

"A leader is someone people respond to, trust and want to work with." -- @ShawnUpchurch