Daily Tech Digest - September 23, 2023

A CISO’s First 90 Days: The Ultimate Action Plan and Advice

It’s a CISO’s responsibility to establish a solid security foundation as rapidly as possible, and there are many mistakes that can be made along the way. This is why the first 90 days are the most important for new CISOs. Without a clear pathway to success in the early months, CISOs can lose confidence in their ability as change agents and put their entire organization at risk of data theft and financial loss. No pressure! Here’s our recommended roadmap for CISOs in the first 90 days of a new role. ... This means they can reduce the feeling of overwhelm and work strategically toward business goals. For a new CISO, it can be challenging to locate and classify all the sensitive data across an organization, not to mention ensuring that it’s safe from a variety of threats. Data protection technology is often focused on perimeters and endpoints, giving internal bad actors the perfect opportunity to slip through any security gaps in files, folders, and devices. For large organizations, it’s practically impossible to audit data activity at scale without a robust DSPM.


There’s No Value in Observability Bloat. Let’s Focus on the Essentials

Telemetry data gathered from the distributed components of modern cloud architectures needs to be centralized and correlated for engineers to gain a complete picture of their environments. Engineers need a solution with critical capabilities such as dashboarding, querying and alerting, and AI-based analysis and response, and they need the operation and management of the solution to be streamlined. What’s important for them to know is that it’s not necessary to spend more to ensure peak performance and visibility as their environmental complexity grows. ... No doubt, more data is being generated, but most of it is not relevant or valuable to an organization. Observability can be optimized to bring greater value to customers, and that’s where the market is headed. Call it “essential observability.” It’s a disruptive vision to propose a re-architected approach to observability, but what engineers need is an approach that makes it easier to surface insights from their telemetry data while deprioritizing low-value data. Costs can be reduced by consuming only the data that enables teams to maintain performance and drive smart business decisions.
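
To make "essential observability" concrete: below is a minimal Python sketch of an ingestion-side filter that keeps high-value signals and samples or drops the rest. The record fields, levels, and sampling rate are invented for illustration, not taken from any particular product.

```python
import random

# Hypothetical ingestion policy: always keep problems, sample routine
# success traffic, and drop debug noise entirely.
KEEP_LEVELS = {"ERROR", "WARN"}
SUCCESS_SAMPLE_RATE = 0.05  # retain 5% of routine events

def should_ingest(record: dict) -> bool:
    """Decide whether a telemetry record is worth paying to store."""
    if record.get("level") == "DEBUG":
        return False                                   # low-value noise
    if record.get("level") in KEEP_LEVELS:
        return True                                    # always keep problems
    return random.random() < SUCCESS_SAMPLE_RATE       # sample the rest

events = [
    {"level": "DEBUG", "msg": "cache hit"},
    {"level": "ERROR", "msg": "upstream timeout"},
    {"level": "INFO",  "msg": "request ok"},
]
ingested = [e for e in events if should_ingest(e)]     # ERROR always survives
```

The point is architectural: the filtering decision happens before storage costs are incurred, which is what distinguishes this from simply querying less after the fact.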


Shedding Light on Dark Patterns in FinTech: Impact of DPDP Act

In practice, these patterns exploit human psychology and trick people into making unwanted choices/purchases. They have become a menace for the FinTech industry, where they are used to encourage people to sign up for loans, credit cards, and other financial products that they may not need or understand. However, the new Digital Personal Data Protection Act, 2023 (“DPDP Act”), can be used to bring such dark patterns under control. The DPDP Act requires online platforms to seek the consent of Data Principals through clear, specific and unambiguous notice before processing any data. Further, the Act empowers individuals to retract/withdraw consent to any agreement at any juncture. ... Companies will need to review their user interfaces and remove any dark patterns they are using, protect personal data, use it for ‘legitimate purposes’ only, and take consent from users, through clear affirmative action, in unambiguous terms. They will also need to develop new ways to promote their products and services without relying on deception.


Can business trust ChatGPT?

It might seem premature to worry about trust when there is already so much interest in the opportunities Gen AI can offer. However, it needs to be recognized that there’s also an opportunity cost — inaccuracy and misuse could be disastrous in ways organizations can’t easily anticipate. Until now, digital technology has traditionally been viewed as trustworthy in the sense that it is deterministic. Like an Excel formula, it will be executed in the same manner 100% of the time, leading to a predictable, consistent outcome. Even when the outcome yields an error — due to implementation issues, changes in the context in which it has been deployed, or even bugs and faults — there is nevertheless a sense that technology should work in a certain way. In the case of Gen AI, however, things are different; even the most optimistic hype acknowledges that it can be unpredictable, and its output is often unexpected. Trust in consistency seems to be less important than excitement at the sheer range of possibilities Gen AI can deliver, seemingly in an instant.
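
The determinism contrast can be shown in a few lines. A self-contained Python sketch, with token probabilities invented for illustration: a spreadsheet-style formula returns the same value every run, while temperature-based sampling, the mechanism behind most Gen AI output, does not.

```python
import math
import random

def excel_style(a: float, b: float) -> float:
    return a * b + 10   # same inputs -> same output, every time

def sample_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Softmax sampling with temperature, as LLM decoders typically do."""
    scaled = {tok: v / temperature for tok, v in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / z for tok, v in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

logits = {"yes": 1.2, "no": 1.0, "maybe": 0.9}    # invented numbers
print(excel_style(3, 4), excel_style(3, 4))       # 22.0 22.0 -- deterministic
print([sample_token(logits) for _ in range(5)])   # varies from run to run
```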


A Few Best Practices for Design and Implementation of Microservices

The first step is to define the microservices architecture. It has to be established how the services will interact with each other before a company attempts to optimise their implementation. Once the microservices architecture gets going, we must be able to capitalise on the increase in speed. It is better to start with a few coarse-grained but self-contained services. Fine graining can happen as the implementation matures over time. The developers, operations team, and testing fraternity may have extensive experience in monoliths, but a microservices-based system is a new reality; hence, they need time to cope with this new shift. Do not discard the monolithic application immediately. Instead, have it co-exist with the new microservices, and iteratively deprecate similar functionalities in the monolithic application. This is not easy and requires a significant investment in people and processes to get started. As with any technology, it is always better to avoid the big bang approach, and identify ways to get your toes wet before diving in head first.
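
The co-existence advice in this excerpt is commonly known as the strangler fig pattern. A minimal Python sketch, with hypothetical service URLs, of the routing facade that makes it work: migrated endpoints go to new microservices, everything else still hits the monolith, and routes move over one at a time.

```python
# Hypothetical routing facade for incremental migration. Endpoints are
# peeled off the monolith one at a time by adding entries to MIGRATED.
MONOLITH_URL = "https://legacy.internal.example.com"   # placeholder
MIGRATED = {
    "/orders":  "https://orders.svc.example.com",      # new microservice
    "/billing": "https://billing.svc.example.com",
}

def resolve_backend(path: str) -> str:
    """Route a request path to the backend that currently owns it."""
    for prefix, backend in MIGRATED.items():
        if path.startswith(prefix):
            return backend
    return MONOLITH_URL   # everything not yet migrated stays put

assert resolve_backend("/orders/42") == "https://orders.svc.example.com"
assert resolve_backend("/reports/q3") == MONOLITH_URL
```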


Bridging Silos and Overcoming Collaboration Antipatterns in Multidisciplinary Organisations

Collaboration is at the heart of teamwork. Many modern organisations set up teams to be cross-functional or multidisciplinary. Multidisciplinary teams are made up of specialists from different disciplines collaborating daily towards a shared outcome. They have the roles needed to design, plan, deliver, deploy and iterate a product or service. Modern approaches and frameworks often focus on increasing flow and reducing blockers, and one way to do this is to remove the barrier between functions. However, as organisations grow in size and complexity, they look for different ways of working together, and some of these create collaboration anti-patterns. Three of the most common antipatterns I see and have named here are: One person split across multiple teams; Product vs. engineering wars; and X-led organisations.


The Rise of the Malicious App

Threat actors have changed the playing field with the introduction of malicious apps. These applications add nothing of value to the hub app. They are designed to connect to a SaaS application and perform unauthorized activities with the data contained within. When these apps connect to the core SaaS stack, they request certain scopes and permissions. These permissions then give the app the ability to read, update, create, and delete content. Malicious applications may be new to the SaaS world, but they are something we've already seen in mobile. Threat actors would create a simple flashlight app, for example, that could be downloaded through the app store. Once downloaded, these minimalistic apps would ask for absurd permission sets and then data-mine the phone. ... Threat actors are using sophisticated phishing attacks to connect malicious applications to core SaaS applications. In some instances, employees are led to a legitimate-looking site, where they have the opportunity to connect an app to their SaaS. In other instances, a typo or slightly misspelled brand name could land an employee on a malicious application's site.
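
The scope-request step is exactly where a review process can intervene. A toy Python sketch of flagging an app whose requested permissions exceed read access; the scope names and risk tiers are invented for illustration, since each SaaS platform defines its own.

```python
# Invented scope names and risk tiers for illustration.
LOW_RISK  = {"files.read", "profile.read"}
HIGH_RISK = {"files.write", "files.delete", "users.manage", "mail.read_all"}

def review_app(app_name: str, requested_scopes: set[str]) -> str:
    """Flag third-party integrations that ask for more than they need."""
    risky   = requested_scopes & HIGH_RISK
    unknown = requested_scopes - LOW_RISK - HIGH_RISK
    if risky or unknown:
        return f"BLOCK {app_name}: manual review needed ({risky | unknown})"
    return f"ALLOW {app_name}"

print(review_app("flashlight-addon", {"profile.read", "mail.read_all"}))
```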


What Is GreenOps? Putting a Sustainable Focus on FinOps

If the future of cloud sustainability appears bleak, Arora advises looking to examples of other tech advancements and the curve of their development, where early adopters led the way and then the main curve eventually followed. “The same thing happened with electric cars,” Arora points out. “They didn’t enter the mainstream because they were better for the environment; they entered the mainstream because the cost came down.” And this is what he predicts will happen with cloud sustainability. Right now, the early adopters are stepping forward and championing GreenOps as a part of the FinOps equation. In a few years, others will be able to measure their data, analyze how they reduced their carbon impact and what effect it had on cloud spending and savings, and then follow their lead. It’s naive to think that most companies will go out of their way (and perhaps even increase their cloud spending) to reduce their carbon footprint. 


The Growing Importance of AI Governance

As AI systems become more powerful and complex, businesses and regulatory agencies face two formidable obstacles: the complexity of the systems requires rule-making by technologists rather than politicians, bureaucrats, and judges; and the thorniest issues in AI governance involve value-based decisions rather than purely technical ones. An approach based on regulatory markets has been proposed that attempts to bridge the divide between government regulators who lack the required technical acumen and technologists in the private sector whose actions may be undemocratic. The technique adopts an outcome-based approach to regulation in place of the traditional reliance on prescriptive command-and-control rules. AI governance under this model would rely on licensed private regulators charged with ensuring AI systems comply with outcomes specified by governments, such as preventing fraudulent transactions and blocking illegal content. The private regulators would also be responsible for the safe use of autonomous vehicles, use of unbiased hiring practices, and identification of organizations that fail to comply with the outcome-based regulations.


Legal Issues for Data Professionals

Lawyers identify risks data professionals may not know they have. Moreover, because data is a new field of law, lawyers need to be innovative in creating legal structures in contracts to allow two or more parties to achieve their goals. For example, there are significant challenges in attempting to apply the legal techniques traditionally used with other classes of business assets (such as intellectual property, real property, and corporate physical assets) to data as a business asset class. Because the old legal techniques do not fit well, lawyers and their clients need to develop new ways of handling the business and legal issues that arise, and in so doing, invent new legal structures that meet the specific attributes of data that differentiate data from other business assets. To take one example, using software agreements as a template for data transactions will not always work because the IP rights for software do not align with data, the concept of software deliverables and acceptance testing is not a good fit, and the representations and warranties are both over- and under-inclusive.



Quote for the day:

"Rarely have I seen a situation where doing less than the other guy is a good strategy." -- Jimmy Spithill

Daily Tech Digest - September 22, 2023

HR Leaders’ strategies for elevating employee engagement in global organisations

In the age of AI, HR technologies have emerged as powerful tools for enhancing employee engagement by streamlining HR processes, improving communication, and personalising the employee experience. Sreedhara added, “By embracing HR Tech, we can enhance the employee experience by reducing administrative burdens, improving access to information, and enabling employees to focus on more meaningful aspects of their work. Moreover, these technologies can contribute to greater employee engagement. Enhancing employee experience via HR tech and tools can improve efficiency and empower employees to take more control of their work-related tasks. We have also enabled some self-service technologies like: an employee portal that serves all HR-related tasks and provides access to policies and processes across the employee life cycle - onboarding, performance management, benefits enrolment, and expense management; employee feedback and surveys; and a databank for predictive analysis (early warning systems) to manage employee engagement.”


Bolstering enterprise LLMs with machine learning operations foundations

Risk mitigation is paramount throughout the entire lifecycle of the model. Observability, logging, and tracing are core components of MLOps processes, which help monitor models for accuracy, performance, data quality, and drift after their release. This is critical for LLMs too, but there are additional infrastructure layers to consider. LLMs can “hallucinate,” where they occasionally output false knowledge. Organizations need proper guardrails—controls that enforce a specific format or policy—to ensure LLMs in production return acceptable responses. Traditional ML models rely on quantitative, statistical approaches to apply root cause analyses to model inaccuracy and drift in production. With LLMs, this is more subjective: it may involve running a qualitative scoring of the LLM’s outputs, then running it against an API with pre-set guardrails to ensure an acceptable answer. Governance of enterprise LLMs will be both an art and science, and many organizations are still understanding how to codify them into actionable risk thresholds. 
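
A minimal Python sketch of the qualitative-scoring-plus-guardrail loop described above. Here `call_llm` and `score_output` are hypothetical stand-ins for a real model endpoint and evaluator, not any particular vendor's API.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical model endpoint; replace with a real API call."""
    raise NotImplementedError

def score_output(text: str) -> float:
    """Hypothetical evaluator returning 0.0-1.0 (could itself be an LLM)."""
    raise NotImplementedError

def guarded_generate(prompt: str, threshold: float = 0.8, retries: int = 3) -> str:
    """Regenerate until the output clears a quality/policy threshold."""
    for _ in range(retries):
        candidate = call_llm(prompt)
        if score_output(candidate) >= threshold:
            return candidate
    # Fail closed: a refusal beats a hallucination in production.
    return "Unable to produce a response that meets policy."
```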


Reimagining Application Development with AI: A New Paradigm

AI-assisted pair programming is a collaborative coding approach where an AI system — like GitHub Copilot or TestPilot — assists developers during coding. It’s an increasingly common approach that significantly impacts developer productivity. In fact, GitHub Copilot is now behind an average of 46 percent of developers’ code, and users are seeing 55 percent faster task completion on average. For new software developers, or those interested in learning new skills, AI-assisted pair programming serves as training wheels for coding. With the benefit of code snippet suggestions, developers can avoid struggling with beginner pitfalls like language syntax. Tools like ChatGPT can act as a personal, on-demand tutor — answering questions, generating code samples, and explaining complex code syntax and logic. These tools dramatically speed the learning process and help developers gain confidence in their coding abilities. Building applications with AI tools hastens development and produces more robust code.


Don't Let AI Frenzy Lead to Overlooking Security Risks

"Everybody is talking about prompt injection or backporting models because it is so cool and hot. But most people are still struggling with the basics when it comes to security, and these basics continue to be wrong," said John Stone - whose title at Google Cloud is "chaos coordinator" - while speaking at Information Security Media Group's London Cybersecurity Summit. Successful AI implementation requires a secure foundation, meaning that firms should focus on remediating vulnerabilities in the supply chain, source code, and larger IT infrastructure, Stone said. "There are always new things to think about. But the older security risks are still going to happen. You still have infrastructure. You still have your software supply chain and source code to think about." Andy Chakraborty, head of technology platforms at Santander U.K., told the audience that highly regulated sectors such as banking and finance must especially exercise caution when deploying AI solutions that are trained on public data sets.


The second coming of Microsoft's do-it-all laptop is more functional than ever

Microsoft's Surface Laptop Studio 2 is really unlike any other laptop on the market right now. The screen is held up by a tiltable hinge that lets it switch from what I'll call "regular laptop mode" to stage mode (the display is angled like the image above) to studio mode (the display is laid flat, screen-side up, like a tablet). The closest thing I can think of is, well, the previous Laptop Studio model, which sports the same shape-shifting form factor. But after today, if you're the customer for Microsoft's screen-tilting Surface device, then your eyes will be all over the latest model, not the old. That's a good thing, because, unlike its predecessor, the new Surface Laptop Studio 2 features an improved 13th Gen Intel Core H-class processor, NVIDIA's latest RTX 4050/4060 GPUs, and an Intel NPU on Windows for video calling optimizations (which never hurts to have). Every Microsoft expert on the demo floor made it clear to me that gaming and content creation workflows are still the focus of the Studio laptop, so the changes under the hood make sense.


Why more security doesn’t mean more effective compliance

Worse, the more tools there are to manage, the harder it might be to prove compliance with an evolving patchwork of global cybersecurity rules and regulations. That’s especially true of legislation like DORA, which focuses less on prescriptive technology controls and more on providing evidence of why policies were put in place, how they’re evolving, and how organizations can prove they’re delivering the intended outcomes. In fact, it explicitly states that security and IT tools must be continuously monitored and controlled to minimize risk. This is a challenge when organizations rely on manual evidence gathering. Panaseer research reveals that while 82% are confident they’re able to meet compliance deadlines, 49% mostly or solely rely on manual, point-in-time audits. This simply isn’t sustainable for IT teams, given the number of security controls they must manage, the volume of data they generate, and continuous, risk-based compliance requirements. They need a more automated way to continuously measure and evidence KPIs and metrics across all security controls.


EU Chips Act comes into force to ensure supply chain resilience

“With the entry into force today of the European Chips Act, Europe takes a decisive step forward in determining its own destiny. Investment is already happening, coupled with considerable public funding and a robust regulatory framework,” said Thierry Breton, commissioner for Internal Market, in comments posted alongside the announcement. “We are becoming an industrial powerhouse in the markets of the future — capable of supplying ourselves and the world with both mature and advanced semiconductors. Semiconductors that are essential building blocks of the technologies that will shape our future, our industry, and our defense base,” he said. The European Union’s Chips Act is not the only government-backed plan aimed at shoring up domestic chip manufacturing in the wake of the supply chain crisis that has plagued the semiconductor industry in recent years. In the past year, the US, UK, Chinese, Taiwanese, South Korean, and Japanese governments have all announced similar plans.


Microsoft Copilot Brings AI to Windows 11, Works Across Multiple Apps and Your Phone

With Copilot, it's possible to ask the AI to write a summary of a book in the middle of a Word document, or to select an image and have the AI remove the background. In one example, Microsoft showed a long email and demonstrated that when you highlight the text, Copilot appears so you can ask it questions related to the email. And that information can be cross-referenced to information found online, such as asking Copilot for lunch spots nearby based on the email's content. Copilot will be available on the Windows 11 desktop taskbar, making it instantly available at one click. Microsoft says that whether you're using Word, PowerPoint or Edge, you can call on Copilot to assist you with various tasks. It can also be called on via voice. Copilot can connect to your phone, so, for example, you can ask it when your next flight is and it'll look through your text messages and find the necessary information. Edge, Microsoft's web browser, will also have Copilot integrations. 


What Are the Biggest Lessons from the MGM Ransomware Attack?

Ransomware groups increasingly focus on branding and reputation, according to Ferhat Dikbiyik, head of research at third-party risk management software company Black Kite. “When ransomware first made its appearance, the attacks were relatively unsophisticated. Over the years, we have observed a marked elevation in their capabilities and tactics,” he tells InformationWeek in a phone interview. ... The group also called out: “The rumors about teenagers from the US and UK breaking into this organization are still just that -- rumors. We are waiting for these ostensibly respected cybersecurity firms who continue to make this claim to start providing solid evidence to support it.” Dikbiyik also notes that ransomware groups’ more nuanced selection of targets is an indication of increased professionalism. “These groups are doing their homework. They have resources. They acquire intelligence tools…they try to learn their targets,” he says. While ransomware is lucrative, money isn’t the only goal. Selecting high-profile targets, such as MGM, helps these groups to build a reputation, according to Dikbiyik.


A Dimensional Modeling Primer with Mark Peco

“Dimensional models are made up of two elements: facts and dimensions,” he explained. “A fact quantifies a property (e.g., a process cost or efficiency score) and is a measurement that can be captured at a point in time. It’s essentially just a number. A dimension provides the context for that number (e.g., when it was measured, who was the customer, what was the product).” It’s through combining facts and dimensions that we create information that can be used to answer business questions, especially those that relate to process improvement or business performance, Peco said. Peco went on to say that one of the biggest challenges he sees with companies using dimensional models is with integrating the potentially huge number of models into one coherent picture of the business. “A company has many, many processes,” he said, “and each requires its own dimensional model, so there has to be some way of joining these models together to give a complete picture of the organization.” 
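
A worked miniature of Peco's facts-plus-dimensions idea, using pandas and invented data: the fact table holds the numbers, the dimension supplies their context, and joining the two answers a business question (revenue by region).

```python
import pandas as pd

# Fact table: one measurement per sale -- just numbers plus foreign keys.
fact_sales = pd.DataFrame({
    "customer_id": [1, 2, 1, 3],
    "amount":      [120.0, 75.5, 60.0, 210.0],
})

# Dimension table: the context for those numbers.
dim_customer = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "name":   ["Acme", "Globex", "Initech"],
    "region": ["EMEA", "AMER", "AMER"],
})

# Joining fact to dimension turns raw measurements into information.
revenue_by_region = (
    fact_sales.merge(dim_customer, on="customer_id")
              .groupby("region")["amount"].sum()
)
print(revenue_by_region)   # AMER 285.5, EMEA 180.0
```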



Quote for the day:

"Things work out best for those who make the best of how things work out." -- John Wooden

Daily Tech Digest - September 21, 2023

6 deadly sins of enterprise architecture

The simplest way to build out enterprise software is to leverage the power of various tools, portals, and platforms constructed by outsiders. Often 90%+ of the work can be done by signing a purchase order and writing a bit of glue code. But trusting the key parts of an enterprise to an outside company has plenty of risks. Maybe some private equity firm buys the outside firm, fires all the good workers, and then jacks up the price knowing you can’t escape. Suddenly, putting all your eggs in one platform starts to look like a bad bet. No one remembers the simplicity and consistency that came from a single interface from a single platform. Spreading out and embracing multiple platforms, though, can be just as painful. The sales team may promise that the tools are designed to interoperate and speak industry-standard protocols, but that gets you only halfway there. Each may store the data in an SQL database, but some use MySQL, others use PostgreSQL, and others use Oracle. There’s no simple answer. Too many platforms create a Tower of Babel. Too few bring the risk of vendor lock-in and all the pain of opening that email with the renewal contract.


Manufacturing firms make early bets on the industrial metaverse

The building blocks of the industrial metaverse are “frequently proprietary, siloed and standalone,” according to a recent report by Miller and Forrester colleagues. Digital twins — which might use IoT sensor data and 3D modelling to provide a real-time picture of a piece of equipment or factory, for example — are perhaps closest to realization, but are still limited in some senses. “The reality today is that most digital twins are still asset- and vendor-specific,” Miller told Computerworld, with the same manufacturer responsible for both hardware and software. For example, an ABB robot may be sold with an ABB digital twin, or a Siemens motor will come with a Siemens digital twin — but getting them to work together can be a challenge. While these types of tools offer clear benefits for customers, firms that own multiple products from multiple vendors will eventually want “one digital twin of how the factory or the line is operating, not 100 digital twins of the different components,” said Miller. Even the most advanced precursor technologies, such as factory-spanning digital twins, tend to be the product of a partnership with one vendor.


How businesses can vet their cybersecurity vendors

Companies can’t assume that the vendor is telling the truth, particularly in the authentication market, where there is currently no standardised testing to confirm solutions pass metrics such as ‘phishing resistance’. When talking to a vendor, whilst it may seem simple, the organisation should first ask: how does the tool prevent social engineering and AiTM attacks? Whilst some solutions might claim to be passwordless or ‘phishing-resistant’, they could instead simply hide the password so that authentication is more convenient while the vulnerability remains. The team needs to determine whether the solution eliminates passwords from both the authentication flow and the account recovery flow, should the user lose their typical login device. And the tool must implement “verifier impersonation protection” to thwart AiTM/proxy-based attacks. Having the security team conduct research beforehand enables them to come prepared to ask detailed questions and helps cut through the buzzwords that vendors use. Going a step further and vetting the vendor itself allows security teams to learn more about the tool and uncover the truth.
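
"Verifier impersonation protection" has a precise meaning in FIDO2/WebAuthn: the browser writes the origin into the signed client data, so an AiTM proxy on a look-alike domain cannot replay the assertion. A simplified Python sketch of just the relying-party origin check (real verification also validates the challenge, signature, and RP ID hash):

```python
import json

EXPECTED_ORIGIN = "https://login.example.com"   # the genuine site

def check_origin(client_data_json: bytes) -> bool:
    """Reject WebAuthn assertions minted for another origin. A phishing
    proxy fails this check because the browser, not the attacker,
    fills in the origin field of clientDataJSON."""
    client_data = json.loads(client_data_json)
    return client_data.get("origin") == EXPECTED_ORIGIN

# Simulated clientDataJSON captured by a look-alike phishing domain:
spoofed = json.dumps({"type": "webauthn.get",
                      "origin": "https://login.examp1e.com"}).encode()
assert check_origin(spoofed) is False
```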


Hidden dangers loom for subsea cables, the invisible infrastructure of the internet

Subsea cables can fall under a wide range of regulatory regimes, laws and authorities. At national level, there may be several authorities involved in their protection, including national telecom authorities, authorities under the NIS Directive, cybersecurity agencies, national coastguard, military, etc. There are also international treaties in place to be considered, establishing universal norms and the legal boundaries of the sea. ... Challenges for subsea cable resilience: Accidental, unintentional damage through fishing or anchoring has so far been the cause of most subsea cable incidents; Natural phenomena such as undersea earthquakes or landslides can have a significant impact, especially in places where there is a high concentration of cables; Chokepoints, where many cables are installed close to each other, are single points of failure, where one physical attack could strain the cable repair capacity; Physical attacks and cyberattacks should be considered as threats for the subsea cables, the landing points, and the ICT at the landing points.


Datacentre operators ‘hesitant’ over how to proceed with server farm builds as AI hype builds

“The developments in generative AI and the increasing use of a wide range of AI-based applications in datacentres, edge infrastructure and endpoint devices require the deployment of high-performance graphics processing units and optimised semiconductor devices,” said Alan Priestley, vice-president analyst at Gartner. “This is driving the production and deployment of AI chips.” And while Gartner’s figures suggest the AI trend is going to continue to take the world of tech by storm, the market watcher’s recently published Hype Cycle for emerging technologies lists generative AI as being at the “peak of inflated expectations”, which might go some way to explaining why operators are reluctant to rush to kit out their sites to accommodate this trend. For colocation operators that are targeting hyperscale cloud firms, many of which regularly talk up the potential for generative AI to transform how enterprises operate, there is perhaps less reticence, said Onnec’s Linqvist.


Developers: Is Your API Designed for Attackers?

The security firm analyzed 40 public breaches to see what role APIs played in security problems, which Snyder featured in his 2023 Black Hat conference presentation. The issue might be built-in vulnerabilities, misconfigurations in the API, or even a logical flaw in the application itself — and that means it falls on developers to fix it, Snyder said. “It’s a range of things, but it is generally with their own APIs,” Snyder told The New Stack. ”It is in their domain of influence, and honestly, their domain of control, because it is ultimately down to them to build a secure API.” The number of breaches analyzed is small — it was limited to publicly disclosed breaches — but Snyder said the problem is potentially much more pervasive. ... In the last couple of months, he said, security researchers who work on this space have uncovered billions of records that could have been breached through poor API design. He pointed to the API design flaws in basically every full-service carrier’s frequent flyer program, which could have exposed entire datasets or allowed for the awarding of unlimited miles and hotel points.
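
Many of those flaws trace back to broken object-level authorization (BOLA): the endpoint authenticates the caller but never checks that the requested record belongs to them. A minimal Python sketch of the bug and the fix, using a hypothetical in-memory store:

```python
# Hypothetical in-memory store keyed by record ID.
ACCOUNTS = {
    101: {"owner_id": "alice", "miles": 52_000},
    102: {"owner_id": "bob",   "miles": 180},
}

def get_account_vulnerable(record_id: int, caller: str) -> dict:
    """BOLA: any authenticated caller can walk record IDs."""
    return ACCOUNTS[record_id]

def get_account_fixed(record_id: int, caller: str) -> dict:
    """Object-level check: the record must belong to the caller."""
    record = ACCOUNTS.get(record_id)
    if record is None or record["owner_id"] != caller:
        raise PermissionError("not found")   # don't reveal which IDs exist
    return record

assert get_account_vulnerable(101, caller="bob")["miles"] == 52_000   # leak!
```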


Rethinking Cybersecurity: The Power of the Hacker Mindset

Embracing a hacker mindset involves adopting an external viewpoint of your business to uncover vulnerabilities before they’re exploited. This includes embracing practices like ethical hacking and penetration testing. While forming a specialised ethical hacking team is an option, embedding this mindset within cyber teams and your wider business is equally effective. Key to this transformation is upskilling. Businesses should be offering training to encourage creative thinking when it comes to cybersecurity. Instead of waiting for breaches to learn from mistakes, being proactive is crucial. Regular, monthly upskilling for cybersecurity and IT teams, rather than every six months or even a year, keeps them on the front foot. Encouraging a hacking mindset also shouldn’t be confined to cyber experts; all employees should undergo cyber awareness training. In this fight, businesses and individuals aren’t alone. Numerous training platforms are available, but choosing those that concentrate on providing practical, hands-on skills rooted in real-world attack scenarios is essential. 


How to get started with prompt engineering

Joseph Reeve leads a team of people working on features that require prompt engineering at Amplitude, a product analytics software provider. He has also built internal tooling to make it easier to work with LLMs. That makes him a seasoned professional in this emerging space. As he notes, "the great thing about LLMs is that there’s basically no hurdle to getting started—as long as you can type!" If you want to assess someone's prompt engineering advice, it's easy to test-drive their queries in your LLM of choice. Likewise, if you're offering prompt engineering services, you can be sure your employers or clients will be using an LLM to check your results. So the question of how you can learn about prompt engineering—and market yourself as a prompt engineer—doesn't have a simple, set answer, at least not yet. "We're definitely in the 'wild west' period," says AIPRM's King. "Prompt engineering means a lot of things to different people. To some it's just writing prompts. To others it's fine-tuning and configuring LLMs and writing prompts."
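
Since the barrier to entry really is "being able to type," here is what the craft looks like in code. A sketch with a hypothetical `complete()` helper standing in for whichever LLM API you use; the engineering lives in the prompt template, not the plumbing.

```python
def complete(prompt: str) -> str:
    """Hypothetical stand-in for your LLM provider's API call."""
    return '{"summary": "stub", "severity": "low"}'   # canned response

# A naive prompt vs. an engineered one: role, constraints, output format.
naive = "Summarize this ticket: {ticket}"

engineered = """You are a support triage assistant.
Summarize the ticket below in exactly one sentence, then assign a
severity from {{low, medium, high}}. Respond as JSON:
{{"summary": "...", "severity": "..."}}

Ticket:
{ticket}"""

ticket = "Checkout page returns a 500 for all EU users since 09:00 UTC."
response = complete(engineered.format(ticket=ticket))
```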


Australia’s new cybersecurity strategy: Build “cyber shields” around the country

The first shield proposes a long-term education of citizens and businesses so that by 2030 they understand cyberthreats and how to protect themselves. This "shield" comes with a plan B that ensures citizens and businesses have proper supports in place so that when they are the victim of a cyber attack, they're able to get back up off the mat very quickly. The second shield is for safer technology. The federal government will have software that is deemed insecure treated like any other unsafe consumer product. "So, in 2030 our vision for safe technology is a world where we have clear global standards for digital safety in products that will help us drive the development of security into those products from their very inception," O'Neil said. ... The fourth proposed shield will focus on protecting Australians' access to critical infrastructure, with the Home Affairs and Cybersecurity minister saying that "part of this year will be about government lifting up its own cyber defences to make sure we're protecting our country."


Modeling Asset Protection for Zero Trust – Part 2

The goal when modeling the data environment for a Zero Trust initiative is to have the information available to decide what data should be available when, where, and by whom. That requires you to know what data you have, its value to the business, and the risk level if lost. The information is used to inform an automated rules engine that enforces governance based on the state of the data request journey. It is not to define or modify a data model. Hopefully, you already have this information catalogued. From a digital asset perspective, most companies think of their data as their crown jewels so the data pillar might be the most important pillar. One challenge with data is that applications supply data access. Many applications are not written to support modern authentication mechanisms and don’t handle the protocols needed to integrate with contemporary data environments so the applications might not support a Zero Trust data model. Hopefully, you’re already experimenting with current mechanisms for your microservice environment. But, if not, as with any elephant, you eat it one bite at a time.
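
A toy Python sketch of the automated rules engine described above, with invented attribute names (real deployments usually externalize such policies to an engine like OPA rather than hard-coding them): every data request is checked against the data's classification and the requester's state, and access is denied by default.

```python
ALLOWED_ROLES = {                     # invented classification scheme
    "internal":   {"employee", "contractor"},
    "restricted": {"finance"},
}

def authorize(request: dict) -> bool:
    """Deny by default; grant only when every condition holds."""
    data_class = request["data_classification"]
    if data_class == "public":
        return True
    checks = [
        request.get("mfa_passed", False),
        request.get("device_compliant", False),
        request.get("role") in ALLOWED_ROLES.get(data_class, set()),
    ]
    return all(checks)

print(authorize({"data_classification": "restricted", "role": "finance",
                 "mfa_passed": True, "device_compliant": True}))   # True
```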



Quote for the day:

"Your time is limited, so don't waste it living someone else's life." -- Steve Jobs

Daily Tech Digest - September 20, 2023

Innovation needs to be a culture, not just a practice

It is important to build open organisational structures that let teams avoid obstacles and hierarchies that frequently stifle creativity. An inventive culture places a strong emphasis on being flat and agile. Employees are more able to freely communicate their thoughts when they have direct access to decision-makers. The well-known sportswear company Nike is one example of this. All levels of staff members are welcome to work together on cutting-edge concepts and technologies at the company's "Innovation Kitchen." This open mindset has produced ground-breaking goods like the Nike Flyknit, which transformed the athletic footwear market. ... Most businesses have started encouraging the participation of employees across sectors in brainstorming sessions to think outside the box because they respect unusual thinking and believe there are no negative ideas. But in some circumstances, one should be ready to also support the genuinely absurd. Innovation requires a space where creativity can thrive.


Quantum Plus AI Widens Cyberattack Threat Concerns

The mind-boggling speed of quantum computing is a double-edged sword, however. On one hand, it helps solve difficult mathematical problems much faster. On the other, it would increase the cyberattack capabilities beyond comprehension. “When you marry quantum computing and AI together, you can have an exponential increase in the advantages that both can offer,” said Dana Neustadter, director of product management for security IP at Synopsys. “Quantum computing will be able to enhance AI accuracy, speed, and efficiency. Enhancing AI can be a game changer for the better for many reasons. Paired with quantum computing, AI will have greater ability to solve very complex problems. As well, it will analyze huge amounts of data needed to take decisions or make predictions more quickly and accurately than conventional AI.” Very efficient and resilient solutions for threat detection and secure management can be created with enhanced AI, transforming cybersecurity as we know it today. “However, if used for the wrong reasons, these powerful technologies also can threaten cybersecurity,” Neustadter said.


IoT startups fill security gaps

Insider risks have long been one of the most difficult cybersecurity threats to mitigate. Not only can power users, such as C-level executives, overrule IT policies, but partners and contractors often get streamlined access to corporate resources, and may unintentionally introduce risks in the name of expediency. As IoT continues to encompass such devices as life-saving medical equipment and self-driving vehicles, even small risks can metastasize into major security incidents. For San Francisco-based self-driving car startup Cruise, a way to mitigate the many risks associated with connected cars is to conduct thorough risk assessments of partners and suppliers. The trouble is that third-party assessments were such a time-consuming and cumbersome chore that the existing process was not able to scale as the company grew. “The rise in cloud puts a huge stress on understanding the risk posture of our partners. That is a complex and non-trivial thing. Partnerships are always under pressure,” said Alexander Hughes, Director of Information Security, Trust, and Assurance at Cruise.


Expert: Keep Calm, Avoid Overhyping China's AI Capabilities

"Some of China's bottlenecks relate to a reliance on Western companies to open up new paradigms, China's censorship regime, and computing power bottlenecks," Ding said. "I submitted three specific policy recommendations to the committee, but I want to emphasize one, which is, 'Keep calm and avoid overhyping China's AI capabilities.'" Policymakers also erroneously think anything that helps China around artificial intelligence is going to hurt the U.S. even though giants in China's AI industry like ByteDance, Alibaba and Baidu end up generating a lot of profits that come back into the U.S. economy and hopefully get reinvested into American productivity, according to Ding. "It's a more difficult question than just, 'Any investment in China's AI sector means it's harmful to U.S. national security,'" Ding said. "Continuing to maintain the openness of these global innovation networks is always going to favor the U.S. in the long-run in terms of our ability to run faster."


Beyond Spreadsheets: How Data-Driven Organizations Outperform the Rest

Creating a data-driven culture must start at the executive level to drive the understanding that data is central to the operations and success of your organization, as well as to decision-making at every level. It begins with communicating the importance of data, making it a corporate initiative. From there must follow implementing the data infrastructure and analytics tools that enable every role to get the data needed to drive evidenced-based decision-making. There is no right or wrong organizational structure to create a data-driven culture. Still, creating and assigning roles and responsibilities that will work for your organization, and then staffing and training accordingly, are essential. You may choose to train most of your staff to understand and support analytics, or you may rely on a few for performing analytics while conveying across your organization the overall importance and requirements of using data and analytics to drive desired results. 


Modeling Asset Protection for Zero Trust – Part 1

For operating your IT environment, the Security Information and Event Management (SIEM) system must be a good fit for the infrastructure. Once you have a complete inventory of your infrastructure, we recommend you complete an architectural-level evaluation of your SIEM to ensure good alignment. ... The evaluation should include the cost of setup and three years of operations, an evaluation of organizational competence and available training for each option, and the features of each against your IT landscape. As you evaluate your SIEM environment, consider evaluating your Extended Detection and Response (XDR) capability and performing a similar architectural evaluation. You might consider this part of your SIEM solution or treat it separately, and it might be operated by a separate group. XDR also might not fit well into any single pillar evaluation, so it could be overlooked if not captured here. Zero Trust requires identification and valuation of all information technology (IT) assets, automated enforcement of governance, and automated detection, response, and remediation to threats and attacks.


Data Engineer vs. Data Analyst

Data engineers play a pivotal role in establishing and maintaining robust Data Governance practices. They are responsible for designing and implementing data pipelines, ensuring that data is collected, stored, and processed accurately. By implementing rigorous quality checks during the extract, transform, load (ETL) process, they guarantee that the data is clean and reliable for analysis. On the other hand, data analysts rely on high-quality and trustworthy data to derive meaningful insights. They work closely with the data engineer to define standards for data collection, storage, and usage. ... So, a crucial similarity between data engineers and data analysts is their shared emphasis on teamwork and collaboration. Both roles recognize that combining their expertise can lead to more accurate insights and better decision-making. Moreover, teamwork enables knowledge sharing between data engineers and analysts. They can exchange ideas, techniques, and best practices, enhancing their individual skill sets while collectively driving innovation in Data Management and analysis.
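
A minimal Python sketch of the "rigorous quality checks during ETL" mentioned above, with invented column names: bad rows are quarantined at load time so analysts downstream never see them.

```python
import pandas as pd

def validate(df: pd.DataFrame) -> tuple[pd.DataFrame, pd.DataFrame]:
    """Split a batch into clean rows and quarantined rows."""
    bad = (
        df["order_id"].isna()                 # required key must be present
        | df["amount"].le(0)                  # values must be plausible
        | df.duplicated(subset="order_id")    # keys must be unique
    )
    return df[~bad], df[bad]

batch = pd.DataFrame({
    "order_id": [1, 2, 2, None],
    "amount":   [9.99, 5.0, 14.0, 3.5],
})
clean, quarantined = validate(batch)   # 2 clean rows, 2 quarantined
```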


What AppSec and developers working in cloud-native environments need to know

With the emergence of IaaS, PaaS, and SaaS models, the definition of an application extends to include the associated runtime environment and the underlying infrastructure. Applications are now not just bundles of code, but holistic systems that include the virtualized hardware resources, operating systems, databases, and network configurations they rely on. The advent of microservices and containerization, where an application can consist of many independently deployable components each running in its own environment, further complicates this definition. In a cloud-native application, each microservice with its code, dependencies, and environment could be considered an “application” in its own right. The introduction of Infrastructure as Code (IaC) has further complicated the definition of applications. IaC is the practice of managing and provisioning infrastructure through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools.
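
IaC's core idea, declare the desired state and let tooling converge reality toward it, fits in a few lines. A toy Python sketch, not a real provisioning tool, with an invented resource spec; Terraform, Pulumi, and CloudFormation do the same diff-and-apply dance against real cloud APIs.

```python
DESIRED = {                   # the machine-readable definition file
    "web-1": {"type": "vm", "cpus": 2},
    "db-1":  {"type": "vm", "cpus": 4},
}
ACTUAL = {"web-1": {"type": "vm", "cpus": 2}}   # what currently exists

def plan(desired: dict, actual: dict) -> list[str]:
    """Diff desired vs. actual state, in the spirit of `terraform plan`."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(f"create {name} {spec}")
        elif actual[name] != spec:
            actions.append(f"update {name} -> {spec}")
    actions += [f"destroy {name}" for name in actual if name not in desired]
    return actions

print(plan(DESIRED, ACTUAL))   # ["create db-1 {'type': 'vm', 'cpus': 4}"]
```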


Could the DOJ’s Antitrust Trial vs Google Drive More Innovation?

The thought process among regulators, he says, might be that the antitrust case against Microsoft brought about change and created opportunities for more competition -- a similar attempt with Google may be worth the effort. “This particular antitrust case really focuses narrowly on the company’s popular search engine, and it alleges that Google uses their 90% market share to illegally throttle competition in search and search advertising,” Kemp says. CTO Jimmie Lee with XFactor.io, a developer of a business decision platform, says he can understand some of big tech’s perspective having come from Meta, Facebook’s parent, and Microsoft. “When you’re in the company, it feels very different from being on the outside,” he says. “From the inside, you see the strength of the technology and how you can better add security and privacy and features and functionalities throughout the entire stack and workflow.”


4 steps for purple team success

Purple teaming is a function of collaborative security. Historically, it has literally brought together offensive security engineers or pen testers from the red side of the team and investigators, detection engineers, and CTI analysts from the blue side of the team. More recently, however, purple teams have looked very different, drawing in a variety of members: developers, architects, information system security officers, software engineers, DFIR teams, and BCP personnel, as well as other departments. To view the purple team simply as a tactical unit would be an oversimplification. Beyond the immediate operational benefits, the true value of a purple team lies in fostering cyber resilience. It is about building an organizational capability that can not only withstand cyber threats but also adapt and recover swiftly from them. By collaboratively assessing, learning, and adapting, the purple team approach instills a resilience mindset, ensuring that the organization is prepared for evolving cyber threats and is capable of bouncing back even when breaches occur.



Quote for the day:

"If you don’t build your dream, someone else will hire you to help them build theirs." -- Dhirubhai Ambani

Daily Tech Digest - September 19, 2023

Experts: 'Quiet cutting' employees makes no sense, and it's costly

The practice involves reassigning workers to roles that don’t align with their career goals to achieve workforce reduction by voluntary attrition — allowing companies to avoid paying costly severance packages or unemployment benefits. “Companies are increasingly using role reassignments as a strategy to sidestep expensive layoffs,” said Annie Rosencrans, people and culture director at HiBob, a human resource platform provider. “By redistributing roles within the workforce, organizations can manage costs while retaining valuable talent, aligning with the current trend of seeking alternatives to traditional layoffs.” ... The optics around quiet cutting and its effects on employee morale are a big problem, however, and experts argue it’s not worth the perceived cost savings. Companies reassigning workers to jobs that may not fit their hopes for a career path or align with their skills can be demoralizing to remaining workers and lead to “disengagement,” according to Chertok. He argued that the quiet cutting trend isn’t necessarily intentional; it's more indicative of corporate America’s need to reprioritize how talent is moved around within an organization.


Why We Need Regulated DeFi

One of DeFi’s greatest challenges is liquidity. In a decentralized exchange, liquidity is added and owned by users, who often abandon one protocol for another offering better rewards, resulting in unstable liquidity on DeFi protocols. A liquidity pool is a group of digital assets gathered to facilitate automated and permissionless trading on a decentralized exchange platform. The users of such exchange platforms don’t rely on a third party to hold funds but transact with each other directly. ... There are many systemic risks currently present in DeFi. For example, potential vulnerabilities in smart contracts can expose users to security breaches. DeFi platforms are often interconnected, meaning a problem on one platform can quickly spread and impact others, potentially causing systemic failures. Another potential systemic risk is the manipulation or failure of oracles, which bring real-world data onto the blockchain. This can result in bad decisions and lead to losses. Ultimately, regulated DeFi can help enforce security standards, fostering trust among users.
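
Under the hood, most DEX liquidity pools price trades with a constant-product rule, x * y = k. A minimal Python sketch with invented pool sizes, showing why the unstable, thin liquidity described above means worse execution for traders:

```python
def swap(pool_x: float, pool_y: float, dx: float, fee: float = 0.003) -> float:
    """Constant-product AMM (x * y = k): return dy received for dx paid."""
    dx_after_fee = dx * (1 - fee)
    k = pool_x * pool_y
    return pool_y - k / (pool_x + dx_after_fee)   # preserves x * y = k

# Deep pool vs. a pool drained by LPs chasing rewards elsewhere.
out_deep = swap(1_000_000, 1_000_000, dx=10_000)   # ~9,871 units out
out_thin = swap(50_000, 50_000, dx=10_000)         # ~8,312 units out
print(out_deep, out_thin)   # same trade, much worse price in the thin pool
```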


Microsoft Azure Data Leak Exposes Dangers of File-Sharing Links

There are so many pitfalls in setting up SAS tokens that Wiz's Luttwak recommends against ever using the mechanism to share files from a private cloud storage account. Instead, companies should have a public account from which resources are shared, he says. "This mechanism is so risky that our recommendation is, first of all, never to share public data within your storage account — create a completely separate storage account only for public sharing," Luttwak says. "That will greatly reduce the risk of misconfiguration. If you want to share public data, create an external storage account for public data and use only that." For those companies that continue to want to share specific files from private storage using SAS URLs, Microsoft has added detection of such tokens to GitHub's monitoring for exposed credentials and secrets, and it has rescanned all repositories, the company stated in its advisory. Microsoft recommends that Azure users limit themselves to short-lived SAS tokens, apply the principle of least privilege, and have a revocation plan.
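
For teams that keep using SAS URLs anyway, "short-lived and least-privilege" looks roughly like this. A sketch assuming the azure-storage-blob v12 Python SDK; the account, container, blob, and key are placeholders.

```python
from datetime import datetime, timedelta
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

ACCOUNT, CONTAINER, BLOB = "myaccount", "reports", "q3.pdf"   # placeholders
ACCOUNT_KEY = "<account-key-from-a-secret-store>"

sas_token = generate_blob_sas(
    account_name=ACCOUNT,
    container_name=CONTAINER,
    blob_name=BLOB,                                 # one blob, not the account
    account_key=ACCOUNT_KEY,
    permission=BlobSasPermissions(read=True),       # least privilege
    expiry=datetime.utcnow() + timedelta(hours=1),  # short-lived
)
url = f"https://{ACCOUNT}.blob.core.windows.net/{CONTAINER}/{BLOB}?{sas_token}"
```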


Chaos Engineering: Path To Build Resilient and Fault-Tolerant Software Applications

The objective of chaos engineering is to unearth system constraints, susceptibilities, and possible failures in a controlled and planned manner, before they turn into perilous challenges with severe impact on the organization. A few of the most innovative organizations, learning from past failures, understood the importance of chaos engineering and embraced it as a key strategy to unravel deeply hidden issues and avoid future failures and business impact. Chaos engineering lets application developers forecast and detect probable collapses by disrupting the system on purpose. The disruption points are identified and altered based on potential system vulnerabilities and weak points. This way, system deficiencies are identified and fixed before they degrade into an outage. Chaos engineering is a growing trend for DevOps and IT teams. A few of the world’s most technologically innovative organizations, like Netflix and Amazon, are pioneers in adopting chaos testing and engineering.
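
A minimal taste of chaos engineering in code: a toy Python fault injector (not Netflix's Chaos Monkey) that adds random latency or raises errors on a configurable fraction of calls, so retry and fallback paths get exercised before production exercises them for you.

```python
import functools
import random
import time

def chaos(failure_rate: float = 0.1, max_delay_s: float = 2.0):
    """Toy fault injector: wrap a dependency call to simulate disruptions."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            time.sleep(random.uniform(0, max_delay_s))    # inject latency
            if random.random() < failure_rate:
                raise TimeoutError("chaos: injected failure")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@chaos(failure_rate=0.2)
def fetch_inventory(sku: str) -> int:
    return 42   # stand-in for a real downstream call

# Run in a controlled environment and verify that callers retry or degrade.
```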


Unregulated DeFi services abused in latest pig butchering twist

At first glance, the pig butchering ring tracked by Sophos operates in much the same way as a legitimate one, establishing pools of cryptocurrency assets and adding new traders – or, in this case, victims – until such time as the cyber criminals drain the entire pool for themselves. This is what is known as a rug-pull. ... “When we first discovered these fake liquidity pools, it was rather primitive and still developing. Now, we’re seeing shā zhū pán scammers taking this particular brand of cryptocurrency fraud and seamlessly integrating it into their existing set of tactics, such as luring targets over dating apps,” explained Gallagher. “Very few understand how legitimate cryptocurrency trading works, so it’s easy for these scammers to con their targets. There are even toolkits now for this sort of scam, making it simple for different pig butchering operations to add this type of crypto fraud to their arsenal. While last year, Sophos tracked dozens of these fraudulent ‘liquidity pool’ sites, now we’re seeing more than 500.”


Time to Demand IT Security by Design and Default

Organizations can send a strong message to IT suppliers by re-engineering procurement processes and legal contracts to align with secure by design and security by default approaches. Updates to procurement policies and processes can set explicit expectations and requirements of their suppliers and flag any lapses. This isn’t about catching vendors out – many will benefit from the nudge. Changes in procurement assessment criteria can be flagged to IT suppliers in advance to give them a chance to course-correct. Suppliers can then be assessed against these yardsticks. If they fail to measure up, organizations have a clear justification to stop doing business with them. The next step is to create liability or penalty clauses in contracts that force IT vendors to share security costs for fixes or bolt-ons. This will drive them to devote more resources to security and prevent rather than scramble to cure security risks. Governments can support this by introducing laws that make it easier to claim under contracts for poor security. 


DeFi as a solution in times of crisis

The collapse of Silicon Valley Bank in March 2023 shows that even large banks are still vulnerable to failure. But instead of requiring trust that their money is still there, Web3 users can verify their holdings directly on chain. Additionally, blockchain technology allows for a more efficient and decentralized financial landscape. The peer-to-peer network pioneered by Bitcoin means that investors can hold their own assets and transact directly with no middlemen and significantly lower fees. And unlike with traditional banks, the rise of DeFi sectors like DEXs, lending and liquid staking means individuals can now have full control over exactly how their deposited assets are used. Inflation is yet another ongoing problem that crypto and DeFi help solve. Unlike fiat currencies, cryptocurrencies like bitcoin have a fixed total supply. This means that your holdings in BTC cannot be easily diluted in the way holdings in a currency such as USD can. While a return to the gold standard of years past is sometimes proposed as a potential solution to inflation, adopting crypto as legal tender would have a similar effect while also delivering a range of other benefits like enhanced efficiency.


Cyber resilience through consolidation part 1: The easiest computer to hack

Most cyberattacks succeed because of simple mistakes caused by users, or users not following established best practices. For example, having weak passwords or using the same password on multiple accounts is critically dangerous, but unfortunately a common practice. When a company is compromised in a data breach, account details and credentials can be sold on the dark web and attackers then attempt the same username-password combination on other sites. This is why password managers, both third-party and browser-native, are growing in utilization and implementation. Two-factor authentication (2FA) is also growing in practice. This security method requires users to provide another form of identification besides just a password — usually via a verification code sent to a different device, phone number or e-mail address. Zero trust access methods are the next step. This is where additional data about the user and their request is analyzed before access is granted. 
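
A minimal sketch of the 2FA verification-code flow using the pyotp library. The secret here is generated on the spot for demonstration; in practice it is provisioned once (e.g., via a QR code) and stored server-side per user.

```python
import pyotp

# Enrollment: generate and store a per-user secret, shown once as a QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The user's authenticator app derives the same 6-digit code from the
# shared secret and the current 30-second time window.
code_from_user = totp.now()          # simulating the user's app here

# Login: password check first, then the second factor.
assert totp.verify(code_from_user)   # accepts codes from the current window
```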


AI for Developers: How Can Programmers Use Artificial Intelligence?

If you write code snippets purely by hand, it is prone to errors. If you audit existing code by hand, it is prone to errors. Many things that happen during software development are prone to errors when they’re done manually. No, AI for developers isn’t completely bulletproof. However, a trustworthy AI tool can help you avoid things like faulty code writing and code errors, ultimately helping you to enhance code quality. ... AI is not 100% bulletproof, and you’ve probably already seen the headlines: “People Are Creating Records of Fake Historical Events Using AI“; “Lawyer Used ChatGPT In Court — And Cited Fake Cases. A Judge Is Considering Sanctions“; “AI facial recognition led to 8-month pregnant woman’s wrongful carjacking arrest in front of kids: lawsuit.” This is what happens when people take artificial intelligence too far and don’t use any guardrails. Your own coding abilities and skill set as a developer are still absolutely vital to this entire process. As much as software developers might love to completely lean on an AI code assistant for the journey, the technology just isn’t to that point.


The DX roadmap: David Rogers on driving digital transformation success

Companies mistakenly think that the best way to achieve success is by committing a lot of resources and focusing on implementation at all costs with the solution they have planned. Many organizations get burned by this approach because they don’t realize that markets are shifting fast, new technologies are coming in rapidly, and competitive dynamics are changing swiftly in the digital era. For example, CNN decided to get into digital news after looking at many benchmarks and reading several reports, thinking subscribers would pay monthly for a standalone news app. It was a disaster, and they shut down the initiative within a month. To overcome this challenge, companies must first unlearn the habit of assuming they know things they don’t know and trying to manage through planning. They should instead manage through experimentation. CIOs can help their enterprises in this area. They must bring what they have learned in their evolution towards agile software development over the years and help apply these rules of small teams, customer centricity, and continuous delivery to every part of the business.



Quote for the day:

"Strategy is not really a solo sport _ even if you_re the CEO." -- Max McKeown