Daily Tech Digest - April 26, 2024

Counting the Cost: The Price of Security Neglect

In the ideal scenario, a new security solution reduces the risk of a cyberattack. But it’s important to invest with the right security vendor. Any time a vendor has access to a company’s systems and data, that company must assess whether the vendor’s security measures are sufficient. The recent Okta breach highlights the significant repercussions of a security vendor breach on its customers. Okta serves as an identity provider for many organizations, enabling single sign-on (SSO). An attacker gaining access to Okta’s environment could potentially compromise the user accounts of Okta customers. Without additional access protection layers, customers may become vulnerable to hackers aiming to steal data, deploy malware, or carry out other malicious activities. When evaluating the privacy risks of security investments, it’s important to consider a vendor’s security track record and certification history. ... Security and privacy leaders can bolster their case for additional investments by highlighting costly data breaches, and can tilt the scale in their favor by seeking solutions with strong records in security, privacy, and compliance.


Is Your Test Suite Brittle? Maybe It’s Too DRY

DRY in test code often presents a similar dilemma. While excessive duplication can make tests lengthy and difficult to maintain, misapplying DRY can lead to brittle test suites. Does this suggest that test code warrants more duplication than application code? A common response to brittle tests is the DAMP acronym, which describes how tests should be written. DAMP stands for "Descriptive and Meaningful Phrases" or "Don’t Abstract Methods Prematurely." Another acronym (we love a good acronym!) is WET: "Write Everything Twice," "Write Every Time," "We Enjoy Typing," or "Waste Everyone’s Time." The literal definition of DAMP has good intentions: descriptive, meaningful phrases and knowing the right time to extract methods are essential when writing software. In a more general sense, though, DAMP and WET are opposites of DRY, and the idea can be summarized as follows: prefer more duplication in tests than you would in application code. However, the same readability and maintainability concerns exist in test code as in application code. Duplication of concepts causes the same maintainability problems in test code as it does in application code.
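
To make the trade-off concrete, here is a minimal sketch with pytest-style tests. The `User` class and its behavior are invented for illustration; the point is only how the two styles read.

```python
# Hypothetical example contrasting an over-DRY test with a DAMP one.

class User:
    def __init__(self, first, last):
        self.first, self.last = first, last

    def full_name(self):
        return f"{self.first} {self.last}"

# Over-DRY: a shared factory hides the inputs the assertion depends on.
def make_default_user():
    return User("Ada", "Lovelace")

def test_full_name_dry():
    user = make_default_user()
    # The reader must open make_default_user() to see why this passes,
    # and changing the factory for one test can break many others.
    assert user.full_name() == "Ada Lovelace"

# DAMP: a little duplication keeps the test self-explanatory.
def test_full_name_damp():
    user = User("Ada", "Lovelace")
    assert user.full_name() == "Ada Lovelace"
```

The duplication still worth avoiding, even in tests, is conceptual duplication, such as re-implementing the name-formatting rule inside a test, which would have to change everywhere whenever the rule changes.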


PCI Launches Payment Card Cybersecurity Effort in the Middle East

The PCI SSC plans to work closely with any organization that handles payments within the Middle East payment ecosystem, with a focus on security, says Nitin Bhatnagar, PCI Security Standards Council regional director for India and South Asia, who will now also oversee efforts in the Middle East. "Cyberattacks and data breaches on payment infrastructure are a global problem," he says. "Threats such as malware, ransomware, and phishing attempts continue to increase the risk of security breaches. Overall, there is a need for a mindset change." The push comes as the payment industry itself faces significant changes, with alternatives to traditional payment cards taking off, and as financial fraud has grown in the Middle East. ... The Middle East is one region where the changes are most pronounced. Middle East consumers favor digital wallets over cards, 60% to 27%, as their preferred method of payment, while consumers in the Asia-Pacific region slightly favor cards, 43% to 38%, according to an August 2021 report by consultancy McKinsey & Company.


4 ways connected cars are revolutionising transportation

Connected vehicles epitomize the convergence of mobility and data-driven technology, heralding a new era of transportation innovation. As cars evolve into sophisticated digital platforms, the significance of data management and storage intensifies. The storage industry must remain agile, delivering solutions that cater to the evolving needs of the automotive sector. By embracing connectivity and harnessing data effectively, stakeholders can unlock new levels of safety, efficiency, and innovation in modern transportation. ... Looking ahead, connected cars are poised to transform transportation even further. As vehicles become more autonomous and interconnected, the possibilities for innovation are limitless. Autonomous driving technologies will redefine personal mobility, enabling efficient and safe transportation solutions. Data-driven services will revolutionise vehicle ownership, offering personalised experiences tailored to individual preferences. Furthermore, the integration of connected vehicles with smart cities will pave the way for sustainable and efficient urban transportation networks.


Looking outside: How to protect against non-Windows network vulnerabilities

Security administrators running Microsoft systems spend a lot of time patching Windows components, but it’s also critical to place your software review resources appropriately – there’s more out there to worry about than the latest Patch Tuesday. ... Review the security and patching status of your edge, VPN, remote access, and endpoint security software. Each of these categories has been used as an entryway into many government and corporate networks. Be prepared to patch or disable any of these tools at a moment’s notice should the need arise. Ensure that you have a team dedicated to identifying and tracking resources that can alert you to potential vulnerabilities and attacks. Resources such as CISA can keep you informed, as can signing up for security and vendor alerts and having staff who follow the various security discussions online. These edge devices and software should always be kept up to date, and you should review lifecycle windows as well as newer technology and releases that may decrease the number of emergency patching sessions your edge team finds itself in.


Application Delivery Controllers: A Key to App Modernization

As the infrastructure running our applications has grown more complex, the supporting systems have evolved to be more sophisticated. Load balancers, for example, have been largely superseded by application delivery controllers (ADCs). These devices are usually placed in a data center between the firewall and one or more application servers, an area known as the demilitarized zone (DMZ). While first-generation ADCs primarily handled application acceleration and load balancing between servers, modern enterprise ADCs have considerably expanded capabilities and have evolved into feature-rich platforms. Modern ADCs include such capabilities as traffic shaping, SSL/TLS offloading, web application firewalls (WAFs), DNS, reverse proxies, security analytics, observability and more. They have also evolved from pure hardware form factors to a mixture of hardware and software options. One leader of this evolution is NetScaler, which started more than 20 years ago as a load balancer and, in the late 1990s and early 2000s, reportedly handled a majority of internet traffic.
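
To ground the terminology, here is a toy sketch of the round-robin distribution that first-generation ADCs performed in their load-balancer role. The backend addresses are hypothetical, and a real ADC layers TLS offload, WAF rules, and traffic shaping on top of this core.

```python
import itertools

# Minimal round-robin load balancer core. Backend addresses are placeholders.
class RoundRobinBalancer:
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def next_backend(self):
        """Return the backend that should serve the next request."""
        return next(self._cycle)

balancer = RoundRobinBalancer(["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"])
for _ in range(6):
    print(balancer.next_backend())  # requests are spread evenly across servers
```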


Curbing shadow AI with calculated generative AI deployment

IT leaders countered by locking down shadow IT or making uneasy peace with employees consuming their preferred applications and compute resources. Sometimes they did both. Meanwhile, another unseemly trend unfolded, first slowly, then all at once. Cloud consumption became unwieldy and costly, with IT shooting itself in the foot through misconfigurations and overprovisioning, among other implementation errors. As they often do when investment is measured against business value, IT leaders began looking for ways to reduce or optimize cloud spending. Rebalancing IT workloads became a popular course correction as organizations realized applications may run better on premises or in other clouds. With cloud vendors backtracking on data egress fees, more IT leaders have begun reevaluating their positions. Make no mistake: The public cloud remains a fine environment for testing and deploying applications quickly and scaling them rapidly to meet demand. But it also makes organizations susceptible to unauthorized workloads. The growing democratization of AI capabilities is an IT leader’s governance nightmare.


CIOs eager to scale AI despite difficulty demonstrating ROI, survey finds

“Today’s CIOs are working in a tornado of innovation. After years of IT expanding into non-traditional responsibilities, we’re now seeing how AI is forcing CIOs back to their core mandate,” Ken Wong, president of Lenovo’s solutions and services group, said in a statement. There is a sense of urgency to leverage AI effectively, but adoption speed and security challenges are hindering efforts. Despite the enthusiasm for AI’s transformative potential, which 80% of CIOs surveyed believe will significantly impact their businesses, the path to integration is not without its challenges. Notably, many organizations are not prepared to integrate AI swiftly, which limits IT’s ability to scale these solutions. ... IT leaders also face the ongoing challenge of demonstrating and calculating the return on investment (ROI) of technology initiatives. The Lenovo survey found that 61% of CIOs find it extremely challenging to prove the ROI of their tech investments, with 42% not expecting positive ROI from AI projects within the next year. One of the main difficulties is calculating ROI to convince CFOs to approve budgets, and this challenge is also present when considering AI adoption, according to Abhishek Gupta, CIO of Dish TV.


AI Bias and the Dangers It Poses for the World of Cybersecurity

Without careful monitoring, these biases could delay threat detection, resulting in data leakage. For this reason, companies combine AI’s power with human intelligence to reduce the bias AI exhibits. The empathy and moral compass of human thinking often prevent AI systems from making decisions that could otherwise leave a business vulnerable. ... The opposite could also occur, as AI could label a non-threat as malicious activity. This could lead to a series of false positives that cannot even be detected from within the company. ... While some might argue that this is a good thing because supposedly “the algorithm works,” it could also lead to alert fatigue. AI threat detection systems were adopted to ease the workload on human teams by reducing the number of alerts. However, constant red flags can create more work for security staff, handing them more tickets to resolve than they originally had. This can lead to employee fatigue and human error, and divert attention from actual threats that could impact security.
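
The arithmetic behind alert fatigue is worth making explicit. The back-of-the-envelope sketch below uses hypothetical volumes and rates, but it shows how even a small false positive rate buries real detections (the base-rate problem).

```python
# Hypothetical numbers: a detector with a modest false positive rate still
# drowns analysts in alerts when nearly all traffic is benign.
events_per_day = 1_000_000      # events scored daily
true_threat_rate = 0.0001       # 0.01% of events are actually malicious
false_positive_rate = 0.01      # 1% of benign events get flagged anyway

threats = events_per_day * true_threat_rate
false_alarms = (events_per_day - threats) * false_positive_rate

print(f"Real threats flagged per day: {threats:.0f}")   # ~100
print(f"False alarms per day: {false_alarms:.0f}")      # ~10,000
# Roughly 100 real threats hidden among ~10,000 false alarms each day.
```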


The Peril of Badly Secured Network Edge Devices

The biggest risk involved internet-exposed Cisco Adaptive Security Appliance (ASA) devices: organizations using them were five times more likely than non-ASA users to file a claim. Users of internet-exposed Fortinet devices were twice as likely to file a claim. Another risk comes in the form of Remote Desktop Protocol. Organizations with internet-exposed RDP filed 2.5 times as many claims as organizations without it, Coalition said. Mass scanning by attackers, including initial access brokers, to detect and exploit poorly protected RDP connections remains rampant. The sheer quantity of new vulnerabilities coming to light underscores the ongoing risks network edge devices pose. ... Likewise for Cisco hardware: "Several critical vulnerabilities impacting Cisco ASA devices were discovered in 2023, likely contributing to the increased relative frequency," Coalition said. In many cases, organizations fail to patch these vulnerabilities, leaving them at increased risk, including from attackers targeting the Cisco AnyConnect vulnerability, designated as CVE-2020-3259, which the vendor first disclosed in May 2020.



Quote for the day:

"Disagree and commit is a really important principle that saves a lot of arguing." -- Jeff Bezos

Daily Tech Digest - April 25, 2024

The rise in CISO job dissatisfaction – what’s wrong and how can it be fixed?

“The reason for dissatisfaction is the lack of executive management support,” says Nikolay Chernavsky, CISO of ISSQUARED, which provides managed IT and security services as well as software products. He says he hears CISOs voice frustrations when their views on required security measures and acceptable risk are dismissed; when the board and CEO don’t define their positions on those issues; or when those leaders don’t recognize the CISO’s work in reducing risk — especially as the CISO faces more accountability and liability. Understandably, CISOs shy away from interview requests to publicly share their frustrations on these issues. However, the IANS Research report speaks to these points, noting, for example, that only 36% of CISOs said they have clear guidance from their board on risk tolerance. Adding to these issues today is the liability that CISOs now face under the new US Securities and Exchange Commission (SEC) cyber disclosure rules as well as other regulatory and legal requirements. That increased liability is coupled with the fact that many CISOs are not covered by their organization’s directors and officers (D&O) liability insurance.


How CIOs align with CFOs to build RevOps

CIOs who transition IT from being a cost center to being a driver of innovation, transformation, and new revenues can become the leaders that the new economy needs. “We used to say that business runs technology,” says David Kadio-Morokro, EY Americas financial services innovation leader. “You tell me what you want, and I’ll code it and support you.” Now it’s switched, he says. “I really believe technology drives the business, because it’s going to impact business strategy and how the business survives,” he adds, and gen AI will force companies to rethink the value of their organizations to customers. “Developing and envisioning an AI-driven strategy is absolutely part of the equation,” he says. “And the CIO has this role of enabling these components, and they need to be part of the conversation and be able to drive that vision for the organization.” The CIO is also in a position to help the CFO evolve, too. CFOs are traditionally risk averse and expect certainty and accuracy from their technology. Not only is gen AI still a new and experimental technology that’s evolving quickly, but it is also, by its very nature, probabilistic and nondeterministic.


Do you need to repatriate from the cloud?

It should be no surprise that repatriation has gained this hype. Cloud, which grew to maturity during an economic boom, is for the first time under downward pressure as companies seek to reduce spending. Amazon, Google, Microsoft, and other cloud providers have feasted on their customers’ willingness to spend. But that willingness has now been tempered by budget cuts. ... Transitioning back to on-premises is a heavy lift, and one that is hard to rescind should things go badly. And the savings aren’t seen until after the transition is complete. Switching to WebAssembly-powered serverless functions, in contrast, is less expensive and less risky. Because such functions can run inside of Kubernetes, the savings thesis can be tested merely by carving off a few representative services, rewriting them, and analyzing the results. Those already invested in a microservice-style architecture are well set up to rebuild just fragments of a multi-service application. Similarly, those invested in event processing chains like data transformation pipelines will also find it easy to identify a step or two in a sequence that can become the testbed for experimentation.


ONDC’s blockchain is a Made-in-India visioning of global digital public infrastructures

ONDC Confidex is a transformative shift towards decentralised trust. Anchored in the blockchain’s native properties, this shift promotes a value exchange network of networks that enables the reuse of continuously assured data that is traceable, reliable, secure, transparent and immutable. Confidex provides a transparent ledger that tracks every phase in the supply chain from production to end consumption. This level of detail not only fosters trust but also aligns with the broader vision of creating a global standard for ensuring the authenticity of product history—a core aspect of continuous data assurance. In the realm of digital transactions, the reliability of data underpins the foundation of trust. Confidex enables data certainty, making each transaction verifiable and immutable. This paves the way for friction-free interactions within digital marketplaces, ensuring that every piece of data holds its integrity from the point of creation to consumption. The digital economy is plagued with issues of forgery and duplication. Confidex addresses this head-on by creating unique digital records that are impossible to replicate or alter.
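
The tamper-evidence idea at the heart of such a ledger can be sketched in a few lines. This is a toy hash chain assuming simplified record contents, not ONDC's actual schema; a production blockchain adds signatures, consensus, and distribution across nodes.

```python
import hashlib
import json

# Each record commits to the hash of the previous one, so altering any
# past entry breaks every later link. The phases are hypothetical.
def record_hash(record):
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

ledger = []
prev = "0" * 64  # genesis marker
for phase in ["produced", "shipped", "warehoused", "sold"]:
    entry = {"phase": phase, "prev_hash": prev}
    prev = record_hash(entry)
    ledger.append(entry)

def verify(ledger):
    """Recompute the chain and check every link."""
    prev = "0" * 64
    for entry in ledger:
        if entry["prev_hash"] != prev:
            return False
        prev = record_hash(entry)
    return True

print(verify(ledger))            # True
ledger[1]["phase"] = "diverted"  # tamper with history...
print(verify(ledger))            # False: the alteration is detectable
```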


How will AI-driven solutions affect the business landscape?

Redmond believes that the tech will quickly become embedded in normal business practice. “We won’t even think about asking gen AI to draft emails or documents or to generate images for our presentations.” He’s also looking forward to seeing how AI-driven video technology plays out, particularly OpenAI’s Sora. “I know that a lot of people in content generation are nervous about these tools replacing them, but I don’t think we hire an artist for their ability to draw, we hire them for their ability to draw what is in their imagination, and that is where their genius lies,” he says. “I am not sure that artists will ever stop creating wonderful works, and these technologies will just enhance that.” Tiscovschi agrees with Redmond’s outlook, stating that “this is just the beginning”. “We will continuously see more teams of humans and their AI agents or tools working together to achieve tasks,” he says. “A human quickly mining their organisation’s IP, automating repetitive tasks and then collaborating with their AI copilot on a report or piece of code will have a constantly growing multiplier on their productivity.”


5 Strategies for Better Results from an AI Code Assistant

The first step is to provide the GPT with high-level context. In the scenario Scarlett describes, Phil demonstrates this by building a Markdown editor. Since Copilot has no idea of the context, he has to provide it, and he does so with a large prompt comment containing step-by-step instructions. For instance, he tells Copilot, “Make sure we have support for bold, italics and bullet points” and “Can you use reactions in the React markdown package.” The prompt enables Copilot to create a functional but unstyled markdown editor. ... Follow up by providing Copilot with specific details, Scarlett advised. “If he writes a comment that says get data from [an] API, then GitHub Copilot may or may not know what he’s really trying to do, and it may not get the best result. It doesn’t know which API he wants to get the data from or what it should return,” Scarlett said. “Instead, you can write a more specific comment that says use the JSON placeholder API, pass in user IDs, and return the users as a JSON object. That way we can get more optimal results.”
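
Scarlett's example maps to something like the sketch below, assuming the requests library. The vague comment gives an assistant little to work with, while the specific one pins down the endpoint, inputs, and return shape (JSONPlaceholder is a public fake API at jsonplaceholder.typicode.com).

```python
import requests

# Vague comment an assistant can only guess at:
# get data from an API

# Specific comment, in the spirit of Scarlett's advice:
# use the JSON placeholder API, pass in user IDs, and return the users
# as a JSON object
def get_users(user_ids):
    users = []
    for user_id in user_ids:
        resp = requests.get(
            "https://jsonplaceholder.typicode.com/users",
            params={"id": user_id},
            timeout=10,
        )
        resp.raise_for_status()
        users.extend(resp.json())  # the endpoint returns a list of matches
    return users

print(get_users([1, 2]))
```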


ESG research unveils critical gaps in responsible AI practices across industries

In light of the ESG Research findings, Qlik recognises the imperative of aligning AI technologies with responsible AI principles. The company’s initiatives in this area are grounded in providing robust data management and analytics capabilities, essential for any organisation aiming to navigate the complexities of AI responsibly. Qlik underscores the importance of a solid data foundation, which is critical for ensuring transparency, accountability, and fairness in AI applications. Qlik’s commitment to responsible AI extends to its approach to innovation, where ethical considerations are integrated into the development and deployment of its solutions. By focusing on creating intuitive tools that enhance data literacy and governance, Qlik aims to address key challenges identified in the report, such as ensuring AI explainability and managing regulatory compliance effectively. Brendan Grady, General Manager, Analytics Business Unit at Qlik, said, “The ESG Research echoes our stance that the essence of AI adoption lies beyond technology—it’s about ensuring a solid data foundation for decision-making and innovation.”


Applying DevSecOps principles to machine learning workloads

Unlike in a conventional software development environment with an integrated development environment (IDE), data scientists typically write code using Jupyter Notebooks. This takes place outside of an IDE, and often outside of the traditional DevSecOps lifecycle. As a result, it’s possible for a data scientist who is not trained in secure development techniques to put sensitive data at risk by storing unprotected secrets or other sensitive information in a notebook. Simply put, the same tools and protections used in the DevSecOps world aren’t effective for ML workloads. The complexity of the environment also matters. Conventional development cycles usually lead directly to a software application interface or API. In the machine learning space, the focus is iterative, building a trainable model that leads to better outcomes. ML environments produce large quantities of serialized files to support a dynamic, iterative workflow. The upshot? Organizations can become overwhelmed by the inherent complexities of versioning and integration.
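
One concrete mitigation is to scan notebooks for apparent secrets before they are shared or committed. Here is a minimal sketch; the regex patterns are illustrative only (real secret scanners cover far more cases), and the notebook path is supplied by the caller.

```python
import json
import re
import sys

# Illustrative patterns only; real secret scanners are much broader.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"(?i)(api[_-]?key|password|token)\s*=\s*['\"][^'\"]+['\"]"),
]

def scan_notebook(path):
    """Report lines in a .ipynb file that look like hardcoded secrets."""
    with open(path) as f:
        nb = json.load(f)
    hits = []
    for i, cell in enumerate(nb.get("cells", [])):
        src = cell.get("source", [])
        if isinstance(src, str):  # source may be one string or a list
            src = src.splitlines()
        for line in src:
            if any(p.search(line) for p in SECRET_PATTERNS):
                hits.append((i, line.strip()))
    return hits

if __name__ == "__main__":
    for cell_idx, line in scan_notebook(sys.argv[1]):
        print(f"cell {cell_idx}: possible secret: {line}")
```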


Introducing Wi-Fi 7 access points that deliver more

This idea that the access point (AP) can do more than just route traffic is a core part of our product philosophy, and we’ve consistently expanded on it over multiple Wi-Fi generations with the addition of location services, IoT protocol support, and extensive network telemetry for security and AIOps. As organizations continue to innovate and leverage applications that require more bandwidth or more IoT devices to support new digital use cases, the AP must continue to do more. Delivering solutions that go beyond standards is part of HPE Aruba Networking’s history and future. Now, with the introduction of 700 series access points that support Wi-Fi 7, we are doubling IoT capabilities with dual BLE 5.4 or 802.15.4/Zigbee radios and dual USB interfaces, and improving location precision for use cases such as asset tracking and real-time inventory tracking. Moreover, we are using both the resources and the management of the AP to their full potential by delivering ubiquitous high-performance connectivity and processing at the edge. What this means is that these access points not only have optimal support for the 2.4, 5, and 6 GHz spectrum but also enough memory and compute capacity to run containers.


Why Your Enterprise Should Create an Internal Talent Marketplace

Strategically, an internal talent marketplace is a way to empower employees to be in the driver’s seat of their career journey, says Gretchen Alarcon, senior vice president and general manager of employee workflows at software and cloud platform provider ServiceNow, via email. "Tactically, it's a platform driven by technology that uses AI to match existing talent to open roles or projects within the organization," she explains. "It provides a more transparent view of new opportunities for employees and identifies untapped employee potential based on skills rather than anecdotes." ... A talent marketplace is only as good as the information it contains, Williamson warns. "Organizations should emphasize to employees that it's in their interest to keep the skills and preferences in their profiles up to date," he says. Managers, meanwhile, need to define the exact critical skills needed to be successful in a particular job or role. "That information drives recommended opportunities for employees and increases their chances of being identified by project managers to fill roles."
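
Skills-based matching can be as simple or as sophisticated as the data allows. Here is a toy sketch assuming skills are already normalized to tags; real marketplaces typically use richer signals (embeddings, proficiency levels, availability), which is also why stale profiles degrade recommendations.

```python
# Toy matcher: rank open roles for an employee by Jaccard overlap between
# skill tags. The profile and roles below are hypothetical.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

employee_skills = {"python", "sql", "dashboards"}
open_roles = {
    "Data Analyst": {"sql", "dashboards", "statistics"},
    "Backend Engineer": {"python", "apis", "kubernetes"},
    "Project Manager": {"planning", "communication"},
}

ranked = sorted(open_roles, key=lambda r: jaccard(employee_skills, open_roles[r]), reverse=True)
for role in ranked:
    print(role, round(jaccard(employee_skills, open_roles[role]), 2))
# Data Analyst 0.5, Backend Engineer 0.2, Project Manager 0.0
```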



Quote for the day:

"Rarely have I seen a situation where doing less than the other guy is a good strategy." -- Jimmy Spithill

Daily Tech Digest - April 24, 2024

The shift towards a combined framework for API threat detection and the protection of vital business applications signals a move to proactive and responsive security. ... Companies cannot afford to underestimate the threat bots pose to their API-driven applications and infrastructure. Traditional silos between fraud and security teams create dangerous blind spots. Fraud detection often lacks visibility into API-level attacks, while API security tools may overlook fraudulent behavior disguised as legitimate traffic. This disconnect leaves businesses vulnerable. By integrating fraud detection, API security, and advanced bot protection, organizations create a more adaptive defense. This proactive approach offers crucial advantages: swift threat response, the ability to anticipate and mitigate vulnerabilities exploited by bots and other malicious techniques, and an in-depth understanding of application abuse patterns. These advantages lead to more effective threat identification and neutralization, combating both low-and-slow attacks and sudden volumetric attacks from bots.
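
As a small illustration of why rate signals alone fall short, the sketch below flags volumetric bursts per client with a sliding window; the window size and threshold are hypothetical. Low-and-slow attacks deliberately stay under such thresholds, which is exactly why API security telemetry and fraud signals need to be combined.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60      # hypothetical window
BURST_THRESHOLD = 100    # hypothetical requests-per-window limit

windows = defaultdict(deque)  # client_id -> timestamps of recent requests

def looks_volumetric(client_id, now=None):
    """Record a request and report whether this client is bursting."""
    now = time.time() if now is None else now
    q = windows[client_id]
    q.append(now)
    while q and q[0] < now - WINDOW_SECONDS:
        q.popleft()  # drop requests that fell out of the window
    return len(q) > BURST_THRESHOLD

# A bot hammering the API trips the check; a patient bot never will,
# so this signal must be fused with behavioral and fraud telemetry.
for i in range(150):
    bursting = looks_volumetric("bot-123", now=1000.0 + i * 0.1)
print(bursting)  # True
```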


Fortifying the Software Supply Chain

Firstly, it enhances security and compliance by consolidating code repositories in a single, cloud-based platform. This allows organizations to gain better control over access permissions and enforce consistent security policies across the entire codebase. Centralized environments can be configured to comply with industry standards and regulations automatically, reducing the risk of breaches that could disrupt the supply chain. As Jen Easterly, Director of the U.S. Cybersecurity and Infrastructure Security Agency (CISA), emphasizes, centralizing source code in the cloud aligns with the goal of working with the open source community to ensure secure software while reaping its benefits. Secondly, cloud-based centralization fosters improved collaboration and efficiency among development teams. With a centralized platform, teams can collaborate in real-time, regardless of their geographical location, facilitating faster decision-making and problem-solving. ... Thirdly, centralized cloud environments offer enhanced reliability and disaster recovery capabilities. Cloud providers typically replicate data across multiple locations, ensuring that a failure in one area does not result in data loss.


GenAI can enhance security awareness training

Social engineering is fundamentally all about psychology and putting the victim in a situation where they feel under pressure to make a decision. Therefore, any form of communication that imparts a sense of urgency and makes an unusual request needs to be flagged, not immediately responded to, and subjected to a rigorous verification process. Much like the concept of zero trust, the approach should be “never trust, always verify”, and the education process should outline the steps that should be taken following an unusual request. For instance, in relation to CFO fraud, the accounts department should have a set limit for payments, and exceeding it should trigger a verification process. This might see staff use a token-based system or authenticator to verify the request is legitimate. Secondly, users need to be aware of oversharing. Is there a company policy to prevent information being divulged over the phone? Restrictions on the posting of company photos that an attacker could exploit? Could social media posts be used to guess passwords or an individual’s security questions? Such steps can reduce the likelihood of digital details being mined.
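
The payment-limit control described above amounts to a simple rule, sketched below with a hypothetical limit and a pluggable out-of-band verifier (a token check, an authenticator prompt, or a call to a known number).

```python
PAYMENT_LIMIT = 10_000  # hypothetical limit set by the accounts department

def process_payment(amount, payee, verify_out_of_band):
    """Pay small amounts directly; hold large ones for verification."""
    if amount <= PAYMENT_LIMIT:
        return f"paid {amount} to {payee}"
    # Unusual request: never act on the message alone; confirm through a
    # second channel before releasing funds.
    if verify_out_of_band(amount, payee):
        return f"paid {amount} to {payee} after verification"
    return "payment blocked pending review"

# Example: a verifier that always demands manual review.
print(process_payment(50_000, "new-supplier", lambda amt, who: False))
# -> payment blocked pending review
```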


AI and Human Capital Management: Bridging the Gap Between HR and Technology

Amid the strides of technology, the human factor remains central for the human resource professional. They grapple with the challenge of seamlessly blending advanced technology with the distinctive human elements that define their workforce. In this contemporary landscape, HR assumes a novel role as a vital link between the embrace of technology and the preservation of human relations. The shift to this new way of working requires an appropriate use of technology to support and enhance existing HR capabilities, thereby increasing their flexibility and effectiveness. ... Ethical considerations are among the most significant challenges in applying AI to HR, requiring mechanisms for equal opportunity, fair decision-making, and transparency in how AI is used. Additionally, the integration of AI into HR operations necessitates adapting change management processes to provide reassurance, organize skill preparation and training programs, and cultivate the acceptance of AI in the organization.


Differential Privacy and Federated Learning for Medical Data

Federated learning is a key strategy for building trust that is backed by technology, not only by contracts and faith in the ethics of the particular employees and partners of the organizations forming consortia. First of all, the data remains at the source, never leaves the hospital, and is not centralized in a single, potentially vulnerable location. The federated approach means there aren’t any external copies of the data that may be hard to remove after the research is completed. The technology blocks access to raw data through multiple techniques that follow the defense-in-depth principle. Each of them minimizes the risk of data exposure and patient re-identification by tens or thousands of times, the aim being to make it economically unviable to discover or reconstruct raw-level data. Data is minimized first to expose only the necessary properties to machine learning agents running locally, PII data is stripped, and anonymization techniques are also used. Then local nodes protect local data against the so-called “too curious data scientist” threat by allowing only the code and operations accepted by local data owners to run against their data.
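
A minimal sketch of the two ideas together follows: only model updates leave each site, and each update is clipped and noised before sharing. The data, sizes, and noise scale are hypothetical stand-ins; real deployments calibrate the noise to a formal privacy budget and secure the aggregation itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1):
    """One local gradient step on a site's private data (never shared)."""
    grad = X.T @ (X @ weights - y) / len(y)  # squared-error gradient
    return weights - lr * grad

def privatize(delta, clip=1.0, noise_scale=0.05):
    """Clip the update and add noise (a simplified differential-privacy step)."""
    norm = np.linalg.norm(delta)
    if norm > clip:
        delta = delta * (clip / norm)
    return delta + rng.normal(0.0, noise_scale, size=delta.shape)

# Three hypothetical hospitals, each with its own local dataset.
d = 5
weights = np.zeros(d)
sites = [(rng.normal(size=(50, d)), rng.normal(size=50)) for _ in range(3)]

for _ in range(20):  # federated rounds
    deltas = [privatize(local_update(weights, X, y) - weights) for X, y in sites]
    weights = weights + np.mean(deltas, axis=0)  # only noisy updates aggregate

print(weights)  # a shared model, trained without pooling raw records
```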


CIO risk-taking 101: Playing it safe isn’t safe

As CIO, you’re in the risk business. Or rather, every part of your responsibilities entails risk, whether you’re paying attention to it or not. And in spite of the spate of books that extol risk-taking as the only smart path, it’s worth remembering that their authors don’t face what might be the biggest risk CIOs have to deal with every day: executive teams adept at preaching risk-taking without actually supporting it. ... Put the staff members and sponsor who annoy you the most in charge of these initiatives. Worst case they succeed, and people you don’t like now owe you a favor or two. Best case they fail and will be held accountable. You can’t lose. ... Those who encourage risk-taking often ignore its polysemy. One meaning: initiatives that, as outlined above, have potential benefit but a high probability of failure. The other: structural risks — situations that might become real and would cause serious damage to the IT organization and its business collaborators if they do. You can choose to not charter a risky initiative, ignoring and eschewing its potential benefits. When it comes to structural risks you can ignore them as well, but you can’t make them go away by doing so, and you will be blamed if they’re “realized.”


Digital Personal Data Protection Act, 2023 - Impact on Banking Sector Outsourced Services

Regulated Entities must design their own privacy compliance program for application-based services – and not solely rely on a package provided by the SP. While the SP may add value and save costs, any solution it provides will likely be optimized for its own efficiency. Customer data management practices can differentiate a business from competitors, enhance customer trust, and provide a competitive advantage. Also, financial penalties under the DPDP are high, extending up to INR 250 crores (on the Regulated Entity as the ‘data fiduciary’ and not on the processor), apart from the reputational damage a breach or prosecution can cause, making it critical to have thorough oversight over the SP vis-à-vis privacy protection. ... For consumer facing services, in addition to security, SPs must technically ensure that the client can comply with its DPDP obligations such as data access requests, erasure, correction and updating personal data, consent withdrawal. Also, the platform should be capable of integrating with consent managers.


Why Is a Data-Driven Culture Important?

Trust and commitment are two important features in a data-driven culture. Trust in the data is exceptionally important, but trust in other staff, for purposes of collaboration and teamwork, is also quite important. Dealing with internal conflicts and misinformation disrupts the smooth flow of doing business. There are a number of issues to consider when creating a data-driven culture. ... In a data-driven culture, everyone should be involved, and this should be communicated to staff and management (exceptions are allowed, for example, the janitor). Everyone using data in doing their job should understand they are also creating data that can be used later for research. When people understand their roles, they can work together as an efficient team to find and eliminate sources of bad data. The process of locating and repairing sources of poor-quality data acts as an educational process for staff and empowers them to be proactive, taking responsibility when they notice a data flow problem. Shifting to a data-driven culture may result in having to hire a few specialists – individuals who are skilled in Data Management, data visualization, and data analysis. 


Harnessing the Collective Ignorance of Solution Architects

One key benefit of adopting an architecture platform, and having different development teams contribute and maintain their designs in a shared model, is that higher levels of abstraction can gain a wide-angled view of the resulting picture. At the context level, the view becomes enterprise-wide, with an abstracted map of the entire application landscape and how it is joined up, both from an IT perspective and to its consuming organizational units, revealing its contribution to business products and services, value streams and business capabilities. ... by combining the abstracting power of a modern EA platform with the consistency and integrity of the C4 model approach, I have removed from their workload the Sisyphean task of hand-crafting an enterprise IT model and replaced it with the “collective ignorance” of an army of supporters who will construct and maintain that enterprise view out of their own interest. Guidance and encouragement are all that is required. The model will remain consistent with the aggregated truth of all solution designs because they are one and the same model, viewed from different angles with different degrees of “selective ignorance”.


5 Hard Truths About the State of Cloud Security 2024

"There's a fundamental misunderstanding of the cloud that somehow there's more security natively built into it, that you're more secure by going to the cloud just by the act of going to the cloud," he says. The problem is that while hyperscale cloud providers may be very good at protecting infrastructure, the control and responsibility over their customer's security posture they have is very limited. "A lot of people think they're outsourcing security to the cloud provider. They think they're transferring the risk," he says. "In cybersecurity, you can never transfer the risk. If you are the custodian of that data, you are always the custodian of the data, no matter who's holding it for you." ... "So much of the zero trust narrative is about identity, identity, identity," Kindervag says. "Identity is important, but we consume identity in policy in zero trust. It's not the end-all, be-all. It doesn't solve all the problems." What Kindervag means is that with a zero trust model, credentials don't automatically give users access to anything under the sun within a given cloud or network. The policy limits exactly what and when access is given to specific assets. 



Quote for the day:

"Great achievers are driven, not so much by the pursuit of success, but by the fear of failure." -- Larry Ellison

Daily Tech Digest - April 23, 2024

Confronting the ethical issues of human-like AI

Regrettably, innovation leaders confronting the thorny ethical questions currently have little in the way of authoritative guidance to turn to. Unlike other technical professions such as civil and electrical engineering, computer science and software engineering currently lack established codes of ethics backed by professional licensing requirements. There is no widely adopted certification or set of standards dictating the ethical use of pseudoanthropic AI techniques. However, by integrating ethical reflection into their development process from the outset and drawing on lessons learned by other fields, technologists working on human-like AI can help ensure these powerful tools remain consistent with our values. ... Tech leaders must recognize that developing human-like AI capabilities is practicing ethics by another means. When it comes to human-like AI, every design decision has moral implications that must be proactively evaluated. Something as seemingly innocuous as using a human-like avatar creates an ethical burden. The approach can no longer be reactive, tacking on ethical guidelines once public backlash hits. 


AI in Platform Engineering: Concerns Grow Alongside Advantages

AI algorithms can automatically analyze past usage patterns, real-time demands and resource availability to allocate resources like servers, storage and databases. AI-powered platforms can ensure reliable infrastructure, eliminating the need for manual configuration and provisioning and saving platform engineers valuable time and effort. Because these platforms are trained on vast amounts of usage data, they can understand individual developer needs and preferences and provide resources when necessary. As a result, they can be used to customize development environments and generate configurations with minimal manual effort. Organizations gather an increasing amount of data daily. As a result, businesses must handle and manage a large amount of data and personal information, ensuring it remains secure and protected. Teams can now reduce the risk of noncompliance and associated penalties by automating crucial processes like records management and ensuring that tasks are carried out in compliance with industry governance protocols and standards, a plus in highly regulated markets.
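
At its simplest, the allocation idea reduces to forecasting the next window's demand from recent usage and provisioning ahead of it. The sketch below uses exponential smoothing as a hypothetical stand-in for a trained model; the capacity and headroom figures are likewise invented.

```python
def forecast(history, alpha=0.5):
    """Exponentially smoothed estimate of the next period's demand."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def servers_needed(history, per_server_capacity=100, headroom=1.2):
    demand = forecast(history) * headroom      # provision ahead of demand
    return max(1, -(-int(demand) // per_server_capacity))  # ceiling division

requests_per_min = [220, 240, 300, 360, 410]   # hypothetical usage pattern
print(servers_needed(requests_per_min))        # 5 servers for a ~434 rpm forecast
```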


Secrets of business-driven IT orgs

“The bottom line is that in today’s era of rapid technological innovation, IT teams are critical partners that teams all across the organization must rely on in order to meet and exceed their business goals,” says Mindy Lieberman, CIO of tech firm MongoDB. “A truly business-driven IT team shouldn’t just be aligned with business strategy, it should have a seat at the leadership table, have a hand in directing business strategy, and be brought in on any major transformational initiatives from the get-go.” To get that hand in directing business strategy, Lieberman created a base with “the right people and the right roadmap, [as well as] the right processes and technology to ensure agility and transparency.” She views IT’s agenda and the business agenda as one and the same. She has modernized operations and internal application infrastructure to ensure her tech team can be responsive to business and customer-facing needs. “Being a business-driven IT team means aligning the tools, processes, technology, and success metrics across an organization to ensure that we are aligned on the outcomes we are looking for and the strategy to deliver those outcomes,” Lieberman says.


Simplifying Intelligent Automation Adoption for Businesses of All Sizes

To maximize success, it's essential to prioritize high-impact processes and streamline repetitive tasks for instant efficiency boosts. When selecting processes for automation, assess their digitization level, stability, and exceptions to gauge implementation feasibility. It's also a must to collaborate with automation service providers to tailor solutions and ensure seamless integration. Automation goals and benefits should be communicated transparently across the organization, addressing concerns and fostering open dialogue for unified commitment and understanding. While the potential of intelligent automation is undeniable, the journey toward its successful implementation is a collaborative effort. By understanding the unique challenges faced by businesses of various sizes and actively addressing them, we can unlock the immense potential of this technology. Aligning automation initiatives with strategic goals ensures that efforts contribute directly to the organization's success and growth. Engaging stakeholders early in the process and demonstrating the potential benefits of selected processes can lead to greater acceptance and collaboration.


Preventing Cyber Attacks Outweighs Cure

"A belief, that it is ok to compromise security for perceived convenience, is counter intuitive. There are few things more inconvenient than having to rebuild a person's identity or try to run a hospital or airport without the systems on which we now depend. Governments must invest resources to roll out defence grade preventive mechanisms and build the cyber security infrastructures that underpin zero trust networks. Indeed, it is widely accepted that identity centric security is the bedrock to Zero Trust Architecture. "It is important to acknowledge the release of the Australian Government's Cyber Strategy, efforts to uplift critical infrastructure standards and progress coordinating a Country wide digital identity framework. I also welcome the ambitious target to embed a zero-trust culture across the Australian Public Service to become a global cyber leader by 2030. "It is also intended to achieve a consistency in cyber security standards across government, industry, and jurisdictions. I commend the Australian Government for taking the initial steps to strengthen legislation and mandate the reporting of incidents. 


Here's why RISC-V is so important

RISC-V is quietly enabling a divergence in custom hardware for domain-specific applications by providing an easy (or at least easier) pathway for businesses and academics to build their own versions of hardware when off-the-shelf options aren't suitable. This works in tandem with the wide range of fully open-source RISC-V implementations on the market. Businesses may be able to take an existing open-source implementation of RISC-V (effectively a design for a complete processor core, usually written in a dedicated language like Verilog/SystemVerilog) and make modifications to suit their specific use case. This can involve dropping aspects that aren't needed and adding pre-bundled supporting elements to the core, which may even be off-the-shelf elements. This means that where previously it wouldn't have been practical or affordable to build specific hardware for a feature, it's now more broadly possible. Companies small and large are already utilizing this technology. It's difficult to know whether companies are designing cores from the ground up or using pre-constructed designs, but custom RISC-V silicon has already made its way into the market.


Can Generative AI Help With Quicker Threat Detections?

ChatGPT can help to some extent with security operations. Microsoft Security Copilot can reduce the load on SOCs. Tools such as Dropzone - an autonomous SOC analysis platform - can look at a phishing alert and take responsive action, with no code, and without anyone having to write playbooks for it. It just analyzes [the threat] and takes the required action. That class of tool is where organizations are going to be able to scale. From a people standpoint, organizations are having trouble hiring or retaining SOC personnel. These tools are going to take a lot of that load off the people and allow them to focus on more important things. ... Organizations are crafting generative AI acceptable use policies. All their employees have to read and sign them. Some organizations are taking it a step further and trying to provide training, just as companies have an annual, basic cyber awareness course. When I ask vendors about training, they either make generative AI training part of cyber training or have separate training. People take the policy, they read it, and then they have the training so they understand what's expected of them.


The Importance Of Proactive & Empathetic Leadership Amidst A Changing Talent Landscape

Empathetic leadership demonstrates the company’s mission and values in action. It starts with the authenticity of who you are as a leader and sharing your own vulnerability. In vulnerability, you are able to empathize to get on someone’s level. You can’t do that if you’re leading by force or in a top-down manner. Empathetic leaders see their employees as people with lives. They trust they will do their jobs. They acknowledge that sometimes people have a tough year or two. These two attributes can—and must—coexist! For leaders, a clear place to start is being proactive in how you develop and implement policies. It’s about creating policies for the company you want to be, not just where you are now. When I took on a leadership position at a startup, I became the first pregnant person at the company. There were no parental leave policies or maternal care benefits in place – these policies needed to be created retroactively around me.


European police chiefs target E2EE in latest demand for ‘lawful access’

“Tech companies are putting a lot of the information on end-to-end encryption. We have no problem with encryption; I’ve got a responsibility to try and protect the public from cybercrime, too — so strong encryption is a good thing — but what we need is for the companies to still be able to pass us the information we need to keep the public safe.” Currently, as a result of being able to scan messages that aren’t encrypted, platforms are sending tens of millions of child safety-related reports a year to police forces around the world, Biggar said — adding a further claim that “on the back of that information, we typically safeguard 1,200 children a month and arrest 800 people.” The implication here is that those reports will dry up if Meta continues expanding its use of E2EE to Instagram. Pointing out that Meta-owned WhatsApp has had the gold standard encryption as its default for years, Robinson wondered if this wasn’t a case of the crime agency trying to close the stable door after the horse has bolted. He got no straight answer to that — just more head-scratching equivocation.


The dawn of intelligent and automated data orchestration

Moving the data from one vendor’s storage type to another, or to a different location or cloud, involves creating a new copy of both the file system metadata and the actual file essence. This proliferation of file copies and the complexity needed to initiate copy management across silos interrupts user access and inhibits IT modernization and consolidation use cases. This reality also impacts data protection, which may become fragmented across the silos. ... It also creates economic inefficiencies when multiple redundant copies of data are created, or when idle data gets stuck on expensive high-performance storage systems when it would be better managed elsewhere. What is needed is a way to provide users and applications with seamless multi-protocol access to all their data, which is often fragmented across multiple vendor storage silos, including across multiple sites and cloud providers. In addition to global user access, IT administrators need to be able to automate cross-platform data services for workflow management, data protection, tiering, etc., but do so without interrupting users or applications.



Quote for the day:

"The ability to summon positive emotions during periods of intense stress lies at the heart of effective leadership." -- Jim Loehr

Daily Tech Digest - April 22, 2024

AI governance for cloud cost optimisation: Best practices in managing expenses in general AI deployment

AI-enabled cost optimization solutions, in contrast to static, threshold-based tools, can actively detect and eliminate idle and underused resources, resulting in significant cost reductions. In addition, they are equipped to foresee and avert possible problems like lack of resources and performance difficulties, guaranteeing continuous and seamless operations. They can also recognize cost anomalies, swiftly respond to them, and even carry out pre-planned actions. This method eliminates the need for continual manual intervention while ensuring effective operations. ... Cloud misconfiguration or improper utilization of cloud resources is frequently the cause of computing surges. One possible scenario is when a worker uses a resource more frequently than necessary. By determining the root cause, organizations can reduce unnecessary cloud resource utilization. By employing AI-enabled cost optimization tools, businesses can improve consumption and identify irregularities to minimize overspending. Moreover, these tools reduce the time-consuming effort of manually screening and evaluating behavior.
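
The anomaly-detection piece can be illustrated in a few lines: compare today's spend to a recent baseline and flag large deviations. The figures and the z-score threshold are hypothetical; real tools also model seasonality and attribute anomalies to specific services or teams.

```python
import statistics

def is_anomalous(history, today, z_threshold=3.0):
    """Flag spend that deviates sharply from the recent baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return stdev > 0 and abs(today - mean) / stdev > z_threshold

daily_spend = [1020, 990, 1010, 1005, 995, 1000]  # hypothetical baseline
print(is_anomalous(daily_spend, 4200))  # True: a surge worth investigating
print(is_anomalous(daily_spend, 1008))  # False: normal variation
```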


The first steps of establishing your cloud security strategy

The purpose of CIS Control 3 is to help you create processes for protecting your data in the cloud. Consumers don’t always know that they’re responsible for cloud data security, which means they might not have adequate controls in place. For instance, without proper visibility, cloud consumers might be unaware that they’re leaking their data for weeks, months, or even years. CIS Control 3 walks you through how to close this gap by identifying, classifying, securely handling, retaining, and disposing of your cloud-based data. ... In addition to protecting your cloud-based data, you need to manage your cloud application security in accordance with CIS Control 16. Your responsibility in this area applies to applications developed by your in-house teams and acquired from external product vendors. To prevent, detect, and remediate vulnerabilities in your cloud-based applications, you need a comprehensive program that brings together people, processes, and technology. Continuous Vulnerability Management, as discussed in CIS Control 7, sits at the heart of this program.


Want to become a successful data professional? Do these 5 things

"I think tech can be quite scary from the outside, but no one knows everything in technology," she says. Young professionals will quickly learn everyone has gaps in their digital knowledge base -- and that's a good thing because people in IT want to learn more. "That's the brilliant thing about it. Even if you're an expert in one thing, you'll know next to nothing in a different part." Whitcomb says the key to success for new graduates is seeing every obstacle as an opportunity. "Going in from square one is quite intimidating," she says. "But if you have that mindset of, 'I want to learn, I'm willing to learn, and I can think logically' then you'll be great. So, don't be put off because you don't know how to code at the start." ... The prevalence of data across modern business processes means interest in technology has peaked during the past few years. However, some young people might still see technology as a dry and stuffy career -- and Whitcomb says that's a misconception. "That always bugs me a little bit. I think IT is incredibly creative. The things you can do with tech are amazing," she says.


Is Scotland emerging as the next big data center market?

From a sustainability perspective, Scotland is second to none. While the weather may have a miserable reputation, it is nonetheless ideal for data centers. A cooler climate means we can rely more on nature to keep equipment at optimum temperatures, with less need for energy-intensive air conditioning. Scotland’s energy mix also consists of a much higher than average share of renewables. The carbon intensity – a measure of how clean electricity is – of Scotland’s grid is well ahead of other European countries and even compares favorably against other parts of the UK. Relocating a data center from Warsaw to Glasgow could cut the carbon intensity of its energy by as much as 99 percent. Scotland’s carbon intensity is one-quarter of London’s, meaning moving a 200-rack facility could save more than six million kilograms of CO2 equivalent, equal to over 14 million miles traveled by the average mid-sized car. ... There is a strong cost imperative too. Relocating to Scotland could save up to 70 percent in operational costs compared to other markets, partially thanks to the cooler climate. The cost of land is another major factor – data center-ready land in Glasgow can cost up to 90 percent less than in Slough, Greater London.


Winning Gen AI Race with Your Custom Data Strategy

Essentially, the Data Fabric Strategy involves a comprehensive plan to seamlessly integrate diverse data sources, processing capabilities, and AI algorithms to enable the creation, training, and deployment of generative AI models. It provides a unified platform approach to collecting, organizing, and governing data, facilitating the development of winning AI products. The Product Manager establishes the North Star Metrics (NSM) for the product according to the business context, with the most prevalent and crucial NSM being user experience, contingent upon three pivotal factors. ... Embarking on the journey of implementing a Data Fabric Strategy, the pinnacle stage lies in sculpting the Solution Architecture tailored for the Gen AI product. While the accountability rests with the Product Manager, the creation of this vital blueprint falls under the purview of the Architect. In dissecting the intricacies of Data Fabric solutions, we encounter two fundamental components: the user-facing interactions and the robust Data Processing Pipeline.


Disciplined entrepreneurship: 6 questions for startup success

Identify key assumptions to be tested before you begin to make heavy investments in product development. Testing the assumptions now is faster and much less costly, preserving valuable resources and letting you adjust as needed. Test each of the individual assumptions you have identified. This scientific approach will allow you to understand which assumptions are valid and which ones are not, and then adjust when the cost of doing so is much lower and can be done much faster. Define the minimal product you can use to start the iterative customer feedback loop — where the customer gets value from the product and pays for it. You must reduce the variables in the equation to get the customer feedback loop started with the highest possibility of success and, simultaneously, the most efficient use of your scarce resources. ... Calculate the annual revenues from the top follow-on markets after you are successful in your beachhead market. It shows the potential that can come from winning your beachhead and motivates you to do so quickly and effectively.


7 innovative ways to use low-code tools and platforms

One of the 7 Rs of cloud app modernization is to replatform components rather than lift and shift entire applications. One replatforming approach is maintaining back-end databases and services while using low-code platforms to rebuild front-end user experiences. This strategy can also enable the development of multiple user experiences for different business purposes, a common practice performed by independent software vendors (ISVs) who build one capability and tailor it to multiple customers. Deepak Anupalli, cofounder and CTO of WaveMaker, says, “ISVs recast their product UX while retaining all their past investment in infrastructure, back-end microservices, and APIs. ... Another area of innovation to consider is when low-code components can replace in-house commodity components. Building a rudimentary register, login, and password reset capability is simple, but today’s security requirements and user expectations demand more robust implementations. Low-code is one way to upgrade these non-differentiating capabilities without investing in engineering efforts to get the required experience, security, and scalability.


Beyond 24/7: How Smart CISOs are Rethinking Threat Hunting

To combat this phenomenon, CISOs are rethinking their approach as the model of 24/7 in-house threat hunting is no longer sustainable for many businesses. Instead, we see an increasing focus on value-driven security solutions that make their own tools work better, harder, and more harmoniously together. This means prioritizing tools that leverage telemetry, deliver actionable insights and integrate into existing stacks seamlessly – and don’t just create another source of noise. This is where Managed Detection and Response (MDR) services come in, offering a strategic solution to these challenges. MDR providers employ experienced security analysts who monitor your environment 24/7, leveraging advanced threat detection and analysis tools and techniques. ... Start by evaluating your current security posture. Identify your organization’s specific security needs and vulnerabilities. This helps you understand how MDR can benefit you and what features are most important. Don’t be swayed by brand recognition alone. While established players offer strong solutions, smaller MDR providers can be equally adept, often with greater flexibility and potentially lower costs.


A time to act: Why Indian businesses need to adopt AI agents now

Currently, the conversational AI chatbots on the market listen to what businesses want and deliver exactly that. This is only one aspect of what generative AI can do. AI Agents take it a step further, performing the same functions as conversational AI bots but with the added capability of acting intuitively. For example, while a user plans a vacation, an agent can complete an expense report without being asked, plan a travel itinerary, book tickets, and more. And this is more than a single use case: these assistants can be applied across industries. ... Imagine a scenario where an AI Agent is deployed in tandem with IoT devices, monitoring signals that indicate when a machine needs replacement parts. The agent can automatically order the parts, have them shipped to specific locations, and recommend times for the technician to arrive for the maintenance work, all with no downtime. This is merely scratching the surface of what AI Agents can do. Built on Large Action Models, AI Agents can automate a wide range of tasks across various industries.
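
As a loose sketch of that predictive-maintenance scenario, an agent might watch wear signals and act without being prompted. Every name here (the signal shape, orderPart, scheduleTechnician, the 80% threshold) is a hypothetical stand-in, not a real product API.

    // Hypothetical sketch: an agent watches IoT wear signals and acts on
    // its own once a threshold trips, before the machine fails.
    interface WearSignal {
      machineId: string;
      partNumber: string;
      wearPercent: number; // 100 = end of life
    }

    // Stand-ins for the actions a Large Action Model-driven agent would invoke.
    async function orderPart(partNumber: string, shipTo: string): Promise<void> {
      console.log(`Ordered ${partNumber}, shipping to ${shipTo}`);
    }
    async function scheduleTechnician(machineId: string, when: Date): Promise<void> {
      console.log(`Technician booked for ${machineId} at ${when.toISOString()}`);
    }

    async function onSignal(signal: WearSignal): Promise<void> {
      if (signal.wearPercent < 80) return; // part still healthy; do nothing
      // Act proactively: order the part and book maintenance before failure.
      await orderPart(signal.partNumber, `site-of-${signal.machineId}`);
      const slot = new Date(Date.now() + 3 * 24 * 60 * 60 * 1000); // in 3 days
      await scheduleTechnician(signal.machineId, slot);
    }

    onSignal({ machineId: "press-07", partNumber: "BRG-2210", wearPercent: 86 });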


6 security items that should be in every AI acceptable use policy

Corporate policies need to include a security item that addresses how the AI system uses and protects sensitive data. By doing so, organizations can promote transparency, accountability, and trust in their AI practices while safeguarding individuals’ privacy and rights. “So if an AI system is being used to assess whether somebody is going to be getting insurance, or healthcare, or a job, that information needs to be used carefully,” says Nader Henein, research vice president for privacy and data protection at Gartner. Companies need to ask what information will be given to those AI systems, and what care will be taken when that data is used to make sensitive decisions, Henein says. The AI AUP needs to establish protocols for handling sensitive data to safeguard privacy, comply with regulations, manage risks, and maintain trust with users and others. These protocols ensure that sensitive data, such as personal information or proprietary business data, is protected from unauthorized access or misuse.



Quote for the day:

“Your most unhappy customers are your greatest source of learning.” -- Bill Gates

Daily Tech Digest - April 19, 2024

Cloud cost management is not working

The Forrester report illuminates significant visibility challenges with existing CCMO tools. Tracking expenses across different cloud activities, such as data management, egress charges, and application integration, remains difficult. Finops is normally on the radar, but these enterprises have yet to adopt useful finops practices, with most programs either nonexistent or not yet off the ground, even if funded. Then there is the fact that enterprises are not yet good at using these tools, which seem to add more cost with little benefit. The assumption is that they will get better and costs will come under control; however, given the additional resource needs of AI deployments, improvements are unlikely for years. At the same time, there is no plan to give IT additional funding, and many companies are trying to hold the line on spending. Despite these challenges, getting cloud spending under control remains a priority, even if the results do not show it. That means major work at the architecture and integration level, which most in IT view as overly complex and too expensive to take on.


Why Selecting the Appropriate Data Governance Operating Model Is Crucial

When deciding on a data governance operating model, you cannot simply pick an approach without evaluating the benefits each one offers. Weigh the potential benefits of centralized and decentralized governance models before making a decision. If the benefits of centralizing your governance operations exceed those of a decentralized model by at least 20%, it is best to centralize. With a centralized governance model, you can bridge the skills gap, achieve consistent outcomes across all business units, easily report on operations, secure executive buy-in at the C-level, and plan for effective continuous feedback, improvement, and change management. The downside is that centralization often leads to operational rigidity, which reduces motivation among mid-level managers, and its bureaucracy can outweigh the benefits. It is also important to consider socio-cultural aspects when formulating your operating model, as they can significantly influence the success of your organization.


5 Steps Toward Military-Grade API Security

When evaluating client security, you must address environment-specific threats. In the browser, military grade starts with the best protections against token theft, where malicious JavaScript threats, also known as cross-site scripting (XSS), are the biggest concern. To reduce the impact of an XSS exploit, use the latest and most secure HTTP-only, SameSite cookies to transport OAuth tokens to your APIs, and use a backend-for-frontend (BFF) component to issue those cookies to JavaScript apps. The BFF should also use a client credential when getting access tokens. ... A utility API then issues cookies on behalf of its SPA without adversely affecting your web architecture. In an OAuth architecture, clients obtain access tokens by running an OAuth flow. To authenticate users, a client uses the OpenID Connect standard and runs a code flow: it sends request parameters to the authorization server and receives response parameters. However, these parameters can potentially be tampered with. For example, an attacker might replay a request and change the scope value in an attempt to escalate privileges.
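
As a rough illustration of the cookie-issuing step, here is a minimal BFF sketch in TypeScript using Express. The token endpoint, client ID, and redirect URI are hypothetical placeholders, and a production BFF would also validate state, use PKCE, and handle errors.

    // Minimal BFF sketch: exchange an OAuth authorization code for tokens,
    // then store the access token in an HTTP-only, SameSite=strict cookie
    // so browser JavaScript (and therefore XSS payloads) can never read it.
    import express from "express";

    const app = express();

    // Hypothetical values; substitute your authorization server's details.
    const TOKEN_ENDPOINT = "https://auth.example.com/oauth/token";
    const CLIENT_ID = "spa-bff";
    const CLIENT_SECRET = process.env.BFF_CLIENT_SECRET ?? "";
    const REDIRECT_URI = "https://app.example.com/callback";

    app.get("/callback", async (req, res) => {
      const code = String(req.query.code ?? "");

      // The BFF authenticates as a confidential client (client credential),
      // something the SPA could never safely do in the browser.
      const tokenResponse = await fetch(TOKEN_ENDPOINT, {
        method: "POST",
        headers: { "Content-Type": "application/x-www-form-urlencoded" },
        body: new URLSearchParams({
          grant_type: "authorization_code",
          code,
          redirect_uri: REDIRECT_URI,
          client_id: CLIENT_ID,
          client_secret: CLIENT_SECRET,
        }),
      });
      const { access_token, expires_in } = await tokenResponse.json();

      // HttpOnly + Secure + SameSite=strict: the token rides along on
      // same-site API requests but is invisible to scripts and other sites.
      res.cookie("at", access_token, {
        httpOnly: true,
        secure: true,
        sameSite: "strict",
        maxAge: expires_in * 1000,
      });
      res.redirect("/");
    });

    app.listen(3000);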


Break Security Burnout: Combining Leadership With Neuroscience

The problem for cybersecurity pros is that they often get stuck in a constant state of psychological fight-or-flight due to the relentless stress cycle of their jobs, Coroneos explains. iRest is a training that helps them switch out of this cycle and into a deeper state of relaxation that resets the fight-or-flight response. This helps the brain switch off so it is not constantly generating the stress, in the workplace and throughout everyday life, that creates burnout, he says. "We need to get them into a position where they can come into a proper relationship with their subconscious," Coroneos says, adding that the cybersecurity professionals who have experienced the training so far — which Cybermindz is currently piloting — report that they are sleeping better and making clearer decisions after only a few sessions. Indeed, while burnout remains a serious problem, the message Coroneos and Williams ultimately want to convey is one of hope: there are solutions to the burnout problem currently facing cybersecurity professionals, and the enormous pressures these dedicated professionals face are not being overlooked.


Unlocking Customer Experience: The Critical Role of Your Supply Chain

It is crucial to find a partner that understands that digital transformation alone is not enough. Rather than a point-solution vendor that solves isolated problems, prioritize a partner that focuses on three main areas: people, processes, and systems. A good partner begins by understanding what is actually happening in mission-critical supply chain processes such as inbound and outbound logistics, supplier management, customer service, help desk, and financial processes. Understanding these root causes helps identify opportunities for improvement and automation. Analyzing data and feedback reveals pain points, bottlenecks, and inefficiencies within each process, and process mapping and performance metrics help pinpoint areas ripe for enhancement. Automation technologies such as AI and machine learning streamline repetitive tasks, reducing errors and enhancing efficiency. By continuously assessing and optimizing these processes, businesses can improve responsiveness, reduce costs, and enhance overall supply chain performance, ultimately driving customer satisfaction and competitive advantage.


AI migration woes: Brain drain threatens India’s tech future

To address the challenge of talent migration, the biggest companies in India must work together to democratise access to resources and opportunities within the tech ecosystem. One key aspect of this approach is fostering a culture of open collaboration among key stakeholders, including top-tier venture capitalists (VCs), corporates, academia, and leading startups, because no single entity can drive AI innovation in isolation. By creating a collaborative ecosystem where information is freely shared and resources are pooled, these stakeholders can level the playing field and provide equal opportunities for aspiring AI professionals across the nation. This could involve establishing platforms dedicated to knowledge exchange, networking events, and cross-sector partnerships aimed at accelerating innovation. ... In addition to these fundamental elements, India's tech ecosystem must also prioritise accessibility and affordability in the adoption of AI-integrated technologies. The future-ready benefits of AI should be democratised, reaching not only large brands but also small and medium-sized enterprises (SMEs), startups, and grassroots organisations.


Are you a toxic cybersecurity boss? How to be a better CISO

Though most CISOs treat their employees fairly, CISOs are human beings — with all the frailties, quirks, and imperfections of the human condition. But CISOs behaving badly expose their own organizations to huge risks. ... One of the thorniest challenges of a toxic CISO is that the person causing the problem is also the one in charge, making them susceptible to blind spots about their own behavior. Nicole L. Turner, a specialist in workplace culture and leadership coaching, got a close-up look at this type of myopia when a top exec (in a non-security role) recently hired her to deliver leadership training to the department heads at his company. “He felt like they needed training because he could tell some things were going on with them, that they were burned out and overwhelmed. But as I’m training them, I notice these sidebar conversations [among his staff] that he was the problem, more so than the work itself. It was just such an ironic thing and he didn’t know,” recounts Turner, owner and chief culture officer at Nicole L. Turner Consulting in Washington, D.C. There’s also some truth to the adage that it’s lonely at the top, especially in a hypercompetitive corporate environment.


Who owns customer identity?

Onboarding users securely yet seamlessly is a constant conflict in many types of businesses, from retail and insurance to fintech. ... If you are in a regulated industry, MFA becomes important; make it risk-based, however, to reduce undue friction. If your business offers a D2C or B2C product or service, seamless onboarding is your number one priority. If user friction is the primary reason for your CIAM initiative, the product or engineering team should take the lead and bring other teams along; if MFA is the main use case, the CISO should lead the discussions and then bring other teams along. ... If testing or piloting is possible, do so: experimentation is very valuable in a CIAM context. Whether you are moving to a new CIAM solution, trying a new auth method, or changing your onboarding process in any other way, run a pilot or an A/B test first. Starting small, measuring the results, and making longer-term decisions accordingly is a healthy cycle to follow for customer identity processes.
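
To illustrate what "risk-based" can mean in practice, here is a minimal TypeScript sketch that steps up to MFA only for risky logins. The signals, weights, and threshold are hypothetical and would be tuned through exactly the kind of pilot or A/B test described above.

    // Hypothetical risk-based MFA check: step up to MFA only when a login
    // looks risky, so low-risk users keep a frictionless onboarding flow.
    interface LoginContext {
      knownDevice: boolean;   // device previously seen on this account
      newCountry: boolean;    // geolocation differs from the user's history
      recentFailures: number; // failed attempts in the last hour
    }

    function riskScore(ctx: LoginContext): number {
      let score = 0;
      if (!ctx.knownDevice) score += 2;
      if (ctx.newCountry) score += 3;
      score += Math.min(ctx.recentFailures, 5); // cap the failure signal
      return score;
    }

    // Require MFA only above a tuned threshold; logging the decision lets
    // a pilot or A/B test compare conversion and fraud rates over time.
    function requiresMfa(ctx: LoginContext): boolean {
      return riskScore(ctx) >= 3;
    }

    console.log(requiresMfa({ knownDevice: true, newCountry: false, recentFailures: 0 }));  // false
    console.log(requiresMfa({ knownDevice: false, newCountry: true, recentFailures: 1 }));  // true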


GenAI: A New Headache for SaaS Security Teams

The GenAI revolution, whose risks remain in the realm of unknown unknowns, comes at a time when the focus on perimeter protection is becoming increasingly outdated. Threat actors today increasingly target the weakest links within organizations, such as human identities, non-human identities, and misconfigurations in SaaS applications. Nation-state threat actors have recently used tactics such as brute-force password sprays and phishing to deliver malware and ransomware and to carry out other attacks on SaaS applications. Complicating efforts to secure SaaS applications, the lines between work and personal device use have blurred in the hybrid work model. Given the temptations that come with the power of GenAI, it will become impossible to stop employees from using the technology, sanctioned or not. The rapid uptake of GenAI in the workforce should therefore be a wake-up call for organizations to reevaluate whether they have the security tools to handle the next generation of SaaS security threats.


What CIOs Can Learn from an Attempted Deepfake Call

Defending against deepfake threats takes a multifaceted strategy. “There's a three-pronged approach where there's education, there's culture, and there's technology,” says Kosak. NINJIO focuses on educating people about cybersecurity risks, like deepfakes, with short, engaging videos. “If you can deepfake a voice and a face or an image based on just a little bit of information or maybe three to four seconds of that voice tone, that's sending us down a path that is going to require a ton of verification and discipline from the individual’s perspective,” says McAlmont. He argues that an hour or two of annual training is insufficient as threats continue to escalate; more frequent training can increase employee vigilance and build a culture of talking openly about cybersecurity concerns. When it comes to training around deepfakes, awareness is key. These threats will keep coming. What does a deepfake sound or look like? (Pretty convincing, in many cases.) What are the common signs that the person you hear or see isn't who they claim to be?



Quote for the day:

“A real entrepreneur is somebody who has no safety net underneath them.” -- Henry Kravis