Daily Tech Digest - December 03, 2023

The need to upskill India’s tech talent is critical. Why? Because India has perhaps the most to gain or lose when it comes to the impact of Generative AI. A survey by ServiceNow found that India needs to upskill 1.62 crore (16.2 million) workers in AI and automation and create 4.7 million new technology jobs by 2027 to meet the nation’s skill deficit. ... Now, the question is, can Generative AI help train such a large young population? Yes! This technology can create personalized learning paths. With modules integrated with AI to optimize outcomes, students can learn better with real-time feedback and take advantage of a more customized learning experience. If India is to become a global economic superpower, engineers must become tech-agnostic and adaptable in a world that’s changing fast. A Generative AI layer must be integrated into their education modules. This will equip them with cutting-edge skills and ensure versatility – from software developers to prompt engineers, enabling them to navigate diverse technological domains.


5 Ways To Demonstrate Leadership Skills In A Team Meeting

Don't just speak to be heard; speak to provide tangible solutions. Leaders are always thinking about innovative ways to drive forward positive business outcomes, and it's your responsibility in these meetings to think of creative solutions to the challenges your team is facing. Even if you don't have the complete solution yet, make some recommendations with a "What if we tried XYZ approach?" This engages other team members and shows that you are confident in sharing your ideas, while simultaneously ensuring everyone is included and feels heard—a mark of a true leader. ... One of the most difficult and embarrassing situations any professional can be placed in is having to acknowledge that they've made a mistake or accidentally jeopardized the success of the team project. But it's the bravest thing to do, and taking ownership of your mistakes is an essential quality of a rising leader. Refuse to cast blame on others or talk behind your colleagues' backs, because this can destroy trust. Instead, seek ways to rectify the situation and actively discuss solutions.


How can AI and advanced analytics streamline due diligence processes in financial industries

In an era of increasing digital transactions, customer due diligence (CDD) demands robust identity verification processes. AI brings biometric data, document analysis, and identity validation methods to the forefront, enhancing the accuracy and speed of customer due diligence. OCR, face match, liveness detection, match logic, and digital address verification facilitate contactless KYC and paperless onboarding. These technologies not only streamline onboarding processes but also contribute to a more secure and fraud-resistant financial ecosystem. Staying compliant with an ever-changing regulatory landscape is a perpetual challenge for financial institutions. AI provides a dynamic solution by automating the monitoring and adaptation to regulatory changes. Leveraging data analytics to best utilise and parse alternate data sources, such as utility bills, financial account data, etc., can help further track customer behaviour while empowering the team to identify discrepancies and stay compliant. From anti-money laundering (AML) to know-your-customer (KYC) tech, AI ensures that due diligence processes remain effective and consistently aligned with the latest regulatory standards. 
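As a rough illustration of how these checks can be chained into a single contactless onboarding flow, here is a minimal Python sketch; the helper functions (ocr_extract, face_match, liveness_check, verify_address) and the match threshold are hypothetical stand-ins for the vendor SDKs or APIs a real deployment would call.

```python
# Minimal sketch of a contactless KYC onboarding flow. The injected helpers
# and the 0.85 threshold are illustrative assumptions, not a real vendor API.
from dataclasses import dataclass

@dataclass
class KycResult:
    passed: bool
    reasons: list

def onboard_customer(id_document: bytes, selfie: bytes, claimed_address: str,
                     ocr_extract, face_match, liveness_check, verify_address) -> KycResult:
    reasons = []

    # 1. OCR the identity document and pull out structured fields.
    fields = ocr_extract(id_document)           # e.g. {"name": ..., "dob": ..., "id_no": ...}

    # 2. Match the selfie against the photo on the document.
    if face_match(id_document, selfie) < 0.85:  # threshold is illustrative
        reasons.append("face mismatch")

    # 3. Confirm the selfie comes from a live person, not a replayed photo.
    if not liveness_check(selfie):
        reasons.append("liveness check failed")

    # 4. Digitally verify the declared address against an external source.
    if not verify_address(fields.get("id_no"), claimed_address):
        reasons.append("address verification failed")

    return KycResult(passed=not reasons, reasons=reasons)
```

The same structure extends naturally to AML screening or ongoing monitoring steps appended after onboarding.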


The World Depends on 60-Year-Old Code No One Knows Anymore

The problem is that very few people are interested in learning COBOL these days. Coding in it is cumbersome, it reads like an English lesson (too much typing), the coding format is meticulous and inflexible, and it takes far longer to compile than its competitors. And since nobody's learning it anymore, programmers who can work with and maintain all that code are increasingly hard to find. Many of these "COBOL cowboys" are aging out of the workforce, and replacements are in short supply. ... If it proves successful, the watsonx Code Assistant could have huge implications for the future, but not everyone is convinced it's the silver bullet IBM says it is. Many who remember IBM’s previous AI experiment, Watson Health, are hesitant to trust another big AI project from the company because the previous one failed so miserably and didn't deliver on its high-flying promises. Gartner Distinguished Vice President and Analyst Arun Chandrasekara is also skeptical because “IBM has no case studies, at this time, to validate its claims,” he says. 


Tech Works: How to Build a Life-Long Career in Engineering

As Hightower put it, “You get to move as fast as you’re willing to believe that you can. You identify a problem and you execute it.” Aim to be agile in mindset and practice as long as you can, both with your organization and with your own career. Nothing is precious. Always look for opportunities to learn. If you get stuck with one language or framework, it limits where you can move and also your ability to change. It may even have you wasting time rebuilding things in your framework of choice — like Hightower acknowledged he used to do with Python. ... IT is a massive cost center that often has to justify itself to an organization’s budget makers, especially with a recession looming. An underappreciated benefit of platform engineering, for instance, is that it can enable a conversation between the tech side and the business side. Developers and engineers benefit from this conversation. They feel a deeper sense of purpose when their work is more closely connected to business goals. That means, especially in a time of increased automation, storytelling is of great value. Acting as a translator and context-giver can help boost an engineer’s value.


Bridging the gap between cloud vs on-premise security

Cloud-native security architectures like SASE and SSE can offer the east-west protection typically delivered by a data center firewall by rerouting all internal traffic through the closest point of presence (PoP). Unlike a local firewall that comes with its own configuration and management constraints, firewall policies configured in the SSE PoP can be managed via the platform’s centralized management console. ... As security functions move increasingly to the cloud, it’s crucial not to lose sight of the controls and security measures needed on-site. Cloud-native protections aim to increase coverage while reducing complexities and boosting convergence. As critical as it is to enable east-west traffic protection within SASE and SSE architectures, it’s equally important to maintain the unified visibility, control, and management offered by such platforms. To achieve this, organizations must avoid getting carried away by emerging threats and adding back disparate security solutions. 
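To make the idea of centrally managed east-west policy concrete (one rule set pushed from the console and evaluated identically at every PoP, instead of per-site firewall configurations), here is a toy Python model; the segment names and rules are invented for the example.

```python
# Toy model of centralized east-west policy. Every PoP evaluates the same
# rule set distributed from one management console; names are illustrative.
CENTRAL_POLICY = [
    {"src": "finance", "dst": "hr",       "action": "deny"},
    {"src": "finance", "dst": "erp",      "action": "allow"},
    {"src": "*",       "dst": "internet", "action": "allow"},
]

def evaluate(src_segment: str, dst_segment: str, policy=CENTRAL_POLICY) -> str:
    """Return the action for an internal (east-west) flow at any PoP."""
    for rule in policy:
        if rule["src"] in (src_segment, "*") and rule["dst"] in (dst_segment, "*"):
            return rule["action"]
    return "deny"  # default-deny for unmatched east-west traffic

print(evaluate("finance", "hr"))   # deny
print(evaluate("finance", "erp"))  # allow
```

The point of the design is that a rule change made once in the console propagates to every enforcement point, which is what preserves the unified visibility and control the excerpt describes.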


5 tweaks every developer should make in Windows 11

The new Windows Terminal is nothing short of fantastic. It's a night-and-day improvement that allows you to run PowerShell, cmd, and WSL sessions within one window. It's customizable, has great tab support, and is even open source. It has a JSON settings configuration similar to VS Code's, which is well worth exploring, and the built-in GUI menus allow you to set your default shell, among a range of other things. The new terminal also supports side-by-side windows or split panes, and background opacity settings. ... While Microsoft has made great strides in recent years trying to win back developers, the Windows file system has often been a pain point. Developers have long been used to a Linux/Unix file system, where managing and creating thousands of small files for dependencies has a trivial impact on overall system performance, and many common tools have been built with this in mind. NTFS has long been known to suffer from a performance gap against ext4, the de facto Linux standard, and Windows Defender's real-time protection can slow things down even further. 
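To get a feel for the small-file overhead described above, a quick micro-benchmark like the one below can be run once in a folder covered by real-time protection and once in an excluded folder (or a Dev Drive); it is only a rough probe, and results vary widely by machine.

```python
# Rough micro-benchmark: time the creation of thousands of tiny files, the
# kind of workload dependency folders (e.g. node_modules) generate. Run it
# in different directories to compare scanned vs. excluded locations.
import os
import tempfile
import time

def create_small_files(directory: str, count: int = 5000) -> float:
    start = time.perf_counter()
    for i in range(count):
        with open(os.path.join(directory, f"dep_{i}.txt"), "w") as f:
            f.write("x" * 64)  # a tiny file, like a typical dependency artifact
    return time.perf_counter() - start

with tempfile.TemporaryDirectory() as d:
    print(f"Created 5000 small files in {create_small_files(d):.2f}s at {d}")
```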


Data Observability: Reliability in the AI Era

Data observability is characterized by the ability to accelerate root cause analysis across data, system, and code and to proactively set data health SLAs across the organization, domain, and data product levels. ... Data engineers are going to be building more pipelines faster (thanks, Gen AI!) and tech debt is going to be accumulating right alongside it. That means degraded query, DAG, and dbt model performance. Slow-running data pipelines cost more, are less reliable, and deliver a poor data consumer experience. That won’t cut it in the AI era, when data is needed as soon as possible. Especially not when the economy is forcing everyone to take a judicious approach to expenses. That means pipelines need to be optimized and monitored for performance, and data observability has to cater to that. ... This will shock no one who has been in the data engineering or machine learning space for the last few years, but LLMs perform better in areas where the data is well-defined, structured, and accurate. Not to mention, there are few enterprise problems to be solved that don’t require at least some context of the enterprise. 
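As a minimal sketch of what a data health SLA check might look like, assuming you can obtain a last-loaded timestamp per table (real observability tools derive this from warehouse metadata rather than a hand-maintained dict):

```python
# Minimal freshness SLA monitor. The tables, SLAs, and timestamps are
# illustrative; in practice these come from warehouse or pipeline metadata.
from datetime import datetime, timedelta, timezone

SLAS = {                                  # table -> maximum allowed staleness
    "orders":      timedelta(hours=1),
    "daily_spend": timedelta(hours=24),
}

def check_freshness(last_loaded: dict) -> list:
    """Return (table, staleness) pairs that breach their SLA."""
    now = datetime.now(timezone.utc)
    breaches = []
    for table, sla in SLAS.items():
        staleness = now - last_loaded[table]
        if staleness > sla:
            breaches.append((table, staleness))
    return breaches

# Example: orders last loaded 3 hours ago breaches its 1-hour SLA.
print(check_freshness({
    "orders":      datetime.now(timezone.utc) - timedelta(hours=3),
    "daily_spend": datetime.now(timezone.utc) - timedelta(hours=2),
}))
```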


Top 9 Cybersecurity Trends in 2024

Looking to the future of cybersecurity, companies will need to implement new cyber defenses to combat info stealer malware, he adds. Organizations should seek comprehensive malware remediation strategies to neutralize the stolen data before it’s used for other cyber incidents. “Session cookies, passwords, and APIs can remain active for weeks or months after they were initially stolen, leaving organizations vulnerable to follow-up or repeat attacks using the same data,” according to Hilligoss. “A holistic post-infection remediation plan that includes monitoring the dark web for malware-stolen data allows enterprises to invalidate any compromised sessions and patch vulnerabilities before criminals use the information to cause harm.” ... In addition, the SEC charges against the SolarWinds chief information security officer (CISO) will change that role in 2024, according to Thomas Kinsella, co-founder and chief customer officer at Tines, a security workflow automation company. The SEC’s decision means more cybersecurity issues will escalate to boardroom issues as CISOs force the entire company to accept the risk rather than shouldering it alone.


Turmoil at OpenAI shows we must address whether AI developers can regulate themselves

In the background, there have been reports of vigorous debates within OpenAI regarding AI safety. This not only highlights the complexities of managing a cutting-edge tech company, but also serves as a microcosm for broader debates surrounding the regulation and safe development of AI technologies. Large language models (LLMs) are at the heart of these discussions. LLMs, the technology behind AI chatbots such as ChatGPT, are exposed to vast sets of data that help them improve what they do – a process called training. However, the double-edged nature of this training process raises critical questions about fairness, privacy, and the potential misuse of AI. Training data reflects both the richness and biases of the information available. The biases may reflect unjust social concepts and lead to serious discrimination, the marginalising of vulnerable groups, or the incitement of hatred or violence. Training datasets can be influenced by historical biases. 



Quote for the day:

“The road to success and the road to failure are almost exactly the same.” -- Colin R. Davis

Daily Tech Digest - December 02, 2023

Infrastructure as Code and Security – Five Ways to Improve Your Approach

Implementing IaC security processes will affect how teams work. For security, it is another set of images that have to be tracked for potential vulnerabilities, with changes flagged for remediation. For developers, these changes can mean another stream of work alongside requests from the business and other fixes that are needed, and this can easily become a problem. Having to learn another set of tools to track issues or find the list of problems will affect how developers work, making it harder to get issues fixed. To solve this problem, security can be integrated into developer workflows and the tools that they use every day. Developers can automate security scans using APIs from within their development environments and integrate with code editors, Git repositories, and CI/CD tools to provide early visibility. These results can then be fed into the developer workflow, flagging potential issues that need to be fixed alongside other requests for work. Rather than being a separate stream of work that developers have to consciously engage with, security fixes to IaC should be treated just the same as other tasks.
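As an illustrative sketch of such an automated scan step, the snippet below shells out to the open-source IaC scanner checkov from a CI job or pre-commit hook and fails the build if checks fail; the flags and JSON handling reflect commonly documented behaviour, but treat them as assumptions to verify against your checkov version.

```python
# Sketch of an IaC scan wired into a developer workflow or CI step.
# Assumes checkov is installed; output shape varies by version/frameworks,
# so both the single-report and multi-report JSON forms are handled.
import json
import subprocess
import sys

def scan_iac(directory: str = ".") -> int:
    result = subprocess.run(
        ["checkov", "-d", directory, "--output", "json"],
        capture_output=True, text=True,
    )
    data = json.loads(result.stdout)
    reports = data if isinstance(data, list) else [data]
    failed = sum(r.get("summary", {}).get("failed", 0) for r in reports)
    # Surface failed checks the same way other work items are surfaced.
    print(f"IaC scan: {failed} failed checks in {directory}")
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(scan_iac())
```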


Deconstructing Telemetry Pipelines: Streamlining Data Management For The Future

The primary objectives of telemetry pipelines are to reduce data clutter, add context and save resources. Good data pipelines build multiple views into the pipeline, organizing and contextualizing data from the get-go. By organizing and labeling data, these pipelines make it easier to extract valuable information from a single source of truth rather than combing through scattered data puddles. Contextualization is the key. It involves tagging data with labels, making it easier to group, filter and analyze as it moves through the system. ... One of the significant issues that telemetry pipelines address is the growing complexity of data management and the challenges posed by tool sprawl. As more data sources are added to an organization's infrastructure and tools multiply, the complexity becomes exponential. Managing this complexity, validating assumptions and keeping data silos in check can become a significant burden. When you’re not indexing correctly, you’re just paying for extra toil. Telemetry data can come from various sources, not just machine data. This could include things like sentiment analysis or even insights into how often somebody uses particular buttons in their smart car. 
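A toy enrichment stage makes the contextualization idea concrete; the field names and label catalogue below are invented for illustration.

```python
# Toy pipeline stage: tag each telemetry event with context labels as it
# flows through, so it can be grouped, filtered, and routed later.
SERVICE_LABELS = {
    "checkout-api": {"team": "payments", "tier": "critical", "env": "prod"},
    "ads-batch":    {"team": "growth",   "tier": "bulk",     "env": "prod"},
}

def enrich(event: dict) -> dict:
    """Attach organizational context to a raw event."""
    labels = SERVICE_LABELS.get(event.get("service"), {"tier": "unclassified"})
    return {**event, "labels": labels}

def route(event: dict) -> str:
    """Send low-value bulk data to cheap storage instead of the hot index."""
    return "object_store" if event["labels"].get("tier") == "bulk" else "hot_index"

e = enrich({"service": "ads-batch", "msg": "job finished", "duration_ms": 5123})
print(route(e), e["labels"])
```

Routing on labels is also where the cost argument shows up: data that is never going to be queried interactively never reaches the expensive index.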


8 change management questions every IT leader must answer

In the digital transformation era, IT success is no longer defined by meeting a go-live date or keeping within a budget. It is determined by the creation of shared vision and goals; achievement of leadership engagement and alignment; broad buy-in and adoption of new systems, platforms, and processes; and realization of business outcomes. ... CIOs must view change management as a kind of GPS for transformation initiatives, designed to keep them on track from the get-go. “If the change rationale isn’t clearly established and communicated at the start, the whole initiative will be an uphill battle,” says Jeanine L. Charlton, senior vice president and chief technology and digital officer at Merchant’s Fleet, where she has been leading the charge to rethink the way the 60-year-old fleet management firm operates. As IT leaders ask their organizations to think differently about the way they do things and adopt radically different alternatives, they must also rethink their own approach to ushering in these changes. 


Thinking in Systems: A Sociotechnical Approach to DevOps

The late philosopher Bernard Stiegler argued that technology is constitutive of human cognition — that is, our use of tools and technology fundamentally shapes our minds and our understanding of the world. That means that adopting better tools improves the ways we think and work. Specifically, tools that map the dizzying array of inputs and outputs within the organization help us to reason through value chains and where we fit within it. This new breed of tools uses directed acyclic graphs to capture your software infrastructure, microservices, tests and jobs. Imagine you are a software developer in a big organization that has thousands of developers. Your team owns an API that controls the flow of widgets through time and space. Your API is used by dozens of other teams, but you lack a sense of greater value, of place in the overall system. How does your API result in business value? What are its upstream and downstream dependencies? If your API were to disappear tomorrow, what consequence would it have on the business? Tools like Garden capture a map of value for software so teams like yours can, at any time, view their part of the whole. 
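A minimal sketch of this kind of dependency reasoning, using an ordinary directed graph library and made-up service names (real tools such as Garden maintain far richer graphs automatically):

```python
# Reasoning about upstream/downstream dependencies with a DAG.
# Edges point from a dependency to the thing that depends on it.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("inventory-db", "widget-api"),   # widget-api depends on inventory-db
    ("widget-api",   "storefront"),
    ("widget-api",   "analytics-job"),
    ("storefront",   "checkout"),
])

print("upstream of widget-api:",   nx.ancestors(g, "widget-api"))
print("downstream of widget-api:", nx.descendants(g, "widget-api"))
# If widget-api disappeared tomorrow, everything in descendants() breaks.
```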


The evolution of multitenancy for cloud computing

The evolution of multitenancy in public cloud computing services will be driven by advancements in container orchestration, edge computing, and artificial intelligence. These technologies will further enhance the capabilities of multitenant environments, such as using an edge system to allocate some of the processing that is tasked to a multitenant architecture, or leveraging AI to direct allocation. This will be much better than the simple algorithms most are using today. As the complexity of client requirements grows, public cloud providers will continue to invest in refining multitenancy approaches. I suspect this will mean focusing on workload isolation, data governance, and compliance management within shared infrastructures. Also, the convergence of multitenancy with hybrid and multicloud architectures will soon be a thing, even though multicloud is already common. The idea will be to offer seamless integration and interoperability across cloud environments, supporting the notion of heterogeneity at the multitenant level and not at the application and data levels, which is how things are done now.


Top 10 Software Architecture Patterns to Follow in 2024

Serverless architecture reduces the requirement for managing servers. It allows developers to focus solely on writing code while cloud providers manage the infrastructure. Serverless functions are event-driven and executed in response to specific events or triggers. Serverless architectures offer automatic scaling, reduced operational overhead, and cost efficiency. Developers can focus on writing code without worrying about server provisioning or maintenance. ... Event Sourcing is a pattern where the state of an application is determined by a sequence of events rather than the current state. Events are stored, and the application's state can be reconstructed by replaying these events. Event Sourcing provides a full audit trail of changes, enables temporal queries, and supports advanced analytics. It is useful in scenarios where historical data tracking is crucial. ... Event-driven architecture is centered around the concept of events, where components communicate by producing and consuming events. Events represent meaningful occurrences within a system and can trigger actions in other parts of the application.
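As a minimal sketch of the Event Sourcing idea (the stored truth is the event log, and current state is rebuilt by replaying it), with invented event names:

```python
# Minimal event sourcing sketch: append-only events, state via replay.
events = [
    {"type": "AccountOpened",  "id": "acc-1"},
    {"type": "MoneyDeposited", "id": "acc-1", "amount": 100},
    {"type": "MoneyWithdrawn", "id": "acc-1", "amount": 30},
]

def replay(event_log):
    """Rebuild current state by folding over the event log."""
    state = {}
    for e in event_log:
        acc = state.setdefault(e["id"], {"balance": 0})
        if e["type"] == "MoneyDeposited":
            acc["balance"] += e["amount"]
        elif e["type"] == "MoneyWithdrawn":
            acc["balance"] -= e["amount"]
    return state

print(replay(events))      # {'acc-1': {'balance': 70}}
print(replay(events[:2]))  # temporal query: state after the first two events
```

Replaying a prefix of the log is what gives the full audit trail and the temporal queries mentioned above.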


Data Management and Consolidation in the Integration of Corporate Information Systems

In the realm of corporate information systems, integration serves a crucial role in improving how we handle and oversee data. This process involves merging data from diverse sources into a single, coherent system, ensuring that all users have access to the same, up-to-date information. The end goal is to maintain data that is both accurate and consistent, which is essential for making informed decisions. This task, known as data management and consolidation, is not just about bringing data together; it's about ensuring the data is reliable, readily available, and structured in a way that supports the company's operations and strategic objectives. By consolidating data, we aim to eradicate inconsistencies and redundancies, which not only enhances the integrity of the data but also streamlines workflows and analytics. It lays the groundwork for advanced data utilization techniques such as predictive analytics, machine learning models, and real-time decision-making support systems. Effective data management and consolidation require the implementation of robust ETL processes, middleware solutions like ESBs, and modern data platforms that support event-driven architectures.
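As a minimal, hypothetical sketch of such a consolidation step (extract records from two invented systems, normalize them, and de-duplicate on a shared key so every consumer sees one consistent record):

```python
# Toy consolidation step in an ETL flow; the source systems, field names,
# and records are invented for illustration.
crm_rows = [{"cust_id": "C1", "email": "A@Example.com", "name": "Asha"}]
erp_rows = [{"customer": "C1", "mail": "a@example.com"},
            {"customer": "C2", "mail": "b@example.com"}]

def normalize_crm(r):
    return {"cust_id": r["cust_id"], "email": r["email"].lower(), "name": r["name"]}

def normalize_erp(r):
    return {"cust_id": r["customer"], "email": r["mail"].lower()}

def consolidate(*sources):
    """Merge normalized rows on cust_id; later sources fill in missing fields."""
    merged = {}
    for rows in sources:
        for r in rows:
            merged.setdefault(r["cust_id"], {}).update(r)
    return list(merged.values())

print(consolidate([normalize_crm(r) for r in crm_rows],
                  [normalize_erp(r) for r in erp_rows]))
# -> one record per customer, with C1 merged from both systems
```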


Decoding The Taj Hotels’ Data Breach And India’s Growing Cybersecurity Battle

In simple terms, a social engineering attack uses psychological manipulation to get access to sensitive data via human interactions. “Hackers research LinkedIn and launch targeted attacks. They gather information about employees and third-party contractors connected with the target organisation and send phishing emails. All they need is an unsuspecting employee or a contractor clicking the link. Then, a variety of innovative social engineering actions follow, leading to APTs. The hackers end up harvesting credentials to gain access to systems and applications,” Venkatramani explained. In the hospitality sector, cyberattacks are predominantly fuelled by a lack of password security hygiene, which encompasses issues such as inadequate credential management, widespread password reuse across various IT assets, insufficient controls on access authorisation, insecure sharing methods like phone calls, neglect of embedded credentials in development environments, and the disregard for essential practices like robust password creation and regular rotation, Venkatramani added.


Website spoofing: risks, threats, and mitigation strategies for CIOs

Protecting the website and preventing users from falling prey to website spoofing scams requires a multilayered approach in which various methods and procedures are employed. Any points of vulnerability on the website must be identified. The organization’s employees must be educated, raising their awareness of scams like phishing attacks and brand impersonation so they remain vigilant about potential attacks. In addition, the most effective way of identifying and preventing spoofing attacks is adopting the right solution. Compliance, software updates, issue resolution, customer support, and various other concerns are then handled by the third-party service provider. ... In a world where technological progress can be exploited for malicious purposes, safeguarding data emerges as the paramount goal for any organization. With the right defense methods and tools, businesses can confidently navigate the digital landscape, conducting day-to-day operations without the looming fear of falling victim to clandestine attacks.


2024: The Year of Wicked Problems

If humans are displaced by the implementation of advanced analytics and machine learning, which can provide equal or superior productivity, and humans cannot evolve quickly enough (or at all), is there a societal social net to fall back on? If the productivity gains centralize wealth at the top of the economic landscape, does this result in mass unemployment and extreme wealth inequality? How will this impact developed nations and developing nations? These are the wicked problems that governments around the world will have to tackle by identifying creative solutions that will have a positive impact on society. In addition, will the extreme polarization of political thinking further enhanced by advanced algorithmic information sharing and prioritization preclude us from coming together in unity to address these challenges in a timely manner? When addressing wicked problems associated with society, there are often archaic laws and regulations that hinder our ability to move fast and efficiently in the iterative design thinking approach. Can these be changed rapidly enough to enable the iterative approach to problem-solving that will allow us to address these wicked problems?



Quote for the day:

"The road to success and the road to failure are almost exactly the same." -- Colin R. Davis

Daily Tech Digest - November 30, 2023

Super apps: the next big thing for enterprise IT?

Enterprise super apps will allow employers to bundle the apps employees use under one umbrella, he said. This will create efficiency and convenience, where different departments can select only the apps they want, much like a marketplace, to customize their working experiences. Other advantages of super apps for enterprises include providing a more consistent user experience, combating app fatigue and app sprawl, and enhancing security by consolidating functions into one company-managed app. Gartner analyst Jason Wong said the analyst firm is seeing interest in super apps from organizations, including big box stores and other retailers, that have a lot of frontline workers who rely on their mobile devices to do their jobs. One company that has adopted a super app to enhance the experience of its frontline workers and other employees is TeamHealth, a leading physician practice in the US. TeamHealth is using an employee super app from MangoApps, which unifies all the tools and resources employees use daily within one central app.


Meta faces GDPR complaint over processing personal data without 'free consent'

The case centres on whether Meta can legitimately claim to have obtained free consent from its customers to process their data, as required under GDPR, when the only alternative is for customers to pay a substantial fee to opt out of ad-tracking. The complaint will be watched closely by social media companies such as TikTok, which are reported to be considering offering ad-free services to customers outside the US to meet the requirements of European data protection law. Meta denied that it was in breach of European data protection law, citing a European Court of Justice ruling in July 2023 which it said expressly recognised that a subscription model was a valid form of consent for an ad-funded service. Spokesman Matt Pollard referred to a blog post announcing Meta’s subscription model, which stated, “The option for people to purchase a subscription for no ads balances the requirements of European regulators while giving users choice and allowing Meta to continue serving all people in the EU, EEA and Switzerland”.


India’s Path to Cyber Resilience Through DevSecOps

DevSecOps, a collaborative methodology between development, security, and operations, places a strong emphasis on integrating security practices into the software development and deployment processes. In India, the approach has gained substantial traction due to several reasons, including a security-first mindset, adherence to compliance requirements and escalating cybersecurity threats. A survey revealed that the primary business driver for DevSecOps adoption is a keen focus on business agility, achieved through the rapid and frequent delivery of application capabilities, as reported by 59 per cent of the respondents. From a technological perspective, the most significant factor is the enhanced management of cybersecurity threats and challenges, a factor highlighted by 57 per cent of the participants. Businesses now understand the importance of proactive security measures. DevSecOps encourages a security-first mentality, ensuring that security is an integral part of the development process from the outset.


Cybersecurity and Burnout: The Cybersecurity Professional's Silent Enemy

In the world of cybersecurity, where digital threats are a constant, the mental health of professionals is an invaluable asset. Mindfulness not only emerges as a shield against the stress and burnout that pose security risks to organizations, but it also becomes a key strategy to reduce the costs associated with lost productivity and staff turnover. By adopting mindfulness practices and preventing burnout, cybersecurity professionals not only preserve their well-being, but also contribute to a healthier work environment, improve the responsiveness and effectiveness of cybersecurity teams, and ensure the continued success of companies in this critical technology field. Cybersecurity challenges are multidimensional. They cannot be managed in only one dimension. Mindfulness is an essential tool to keep us one step ahead. By recognizing the value of emotional well-being in the fight against cyberattacks, we can build a stronger and more sustainable defense. Cybersecurity is not only a technical issue, but also a human one, and mindfulness presents itself as a key piece in this intricate security puzzle.


Will AI replace Software Engineers?

While AI is automating some tasks previously done by devs, it’s not likely to lead to widespread job losses. In fact, AI is creating new job opportunities for software engineers with the skills and expertise to work with AI. According to a 2022 report by the McKinsey Global Institute, AI is expected to create 9 million new jobs in the United States by 2030. The jobs that are most likely to be lost to AI are those that are routine and repetitive, such as data entry and coding. However, software engineers with the skills to work with AI will be in high demand. ... Embrace AI as a tool to enhance your skills and productivity as a software engineer. While there's concern about AI replacing software engineers, it's unlikely to replace high-value developers who work on complex and innovative software. To avoid being replaced by AI, focus on building sophisticated and creative solutions. Stay up-to-date with the latest AI and software engineering developments, as this field is constantly evolving. Adapt to the changing landscape by acquiring new skills and techniques. Remember that AI and software engineering can collaborate effectively, as AI complements human skills. 


Bridging the risk exposure gap with strategies for internal auditors

Without a strategic view of the future — including a clear-eyed assessment of strengths, weaknesses, opportunities, threats, priorities, and areas of leakage — internal audit is unlikely to recognize actions needed to enable success. There is no bigger threat to organizational success than a misalignment between exponentially increasing risks and a failure to respond due to a lack of vision, resources, or initiative. Create and maintain a good, well-documented strategic plan for your internal audit function. This can help you organize your thinking, force discipline in definitions, facilitate implementation, and continue asking the right questions. Nobody knows for certain what lies ahead, and a well-developed strategic plan is a key tool for preparing for chaos and ambiguity. ... Companies may have less time than they think to prepare for compliance, and internal auditors should be supporting their organizations in getting the right enabling processes and technologies in place as soon as possible. This will require a continuing focus on breaking down silos and improving how internal audit collaborates with its risk and compliance colleagues. 


Generative AI in the Age of Zero-Trust

Enter generative AI. Generative AI models generate content, predictions, and solutions based on vast amounts of available data. They’re making waves not just for their ‘wow’ factor, but for their practical applications. It’s only natural that employees would gravitate to the latest technology offering the ability to make them more efficient. For cybersecurity, this means potential tools that offer predictive threat analysis based on patterns, provide automatic code fixes, dynamically adjust policies in response to evolving threat landscapes and even automatically respond to active attacks. If used correctly, generative AI can shoulder some of the burdens of the complexities that have built up over the course of the zero-trust era. But how can you trust generative AI if you are not in control of the data that trains it? You can’t, really. ... This is forcing organizations to start setting generative AI policies. Those that choose the zero-trust path and ban its use will only repeat the mistakes of the past. Employees will find ways around bans if it means getting their job done more efficiently. Those who harness it will make a calculated tradeoff between control and productivity that will keep them competitive in their respective markets.


Organizations Must Embrace Dynamic Honeypots to Outpace Attackers

There are a number of ways in which AI-powered honeypots are superior to their static counterparts. The first is that because they evolve independently, they can become far more convincing over time. This sidesteps the problem of constantly making manual adjustments to present the honeypot as a realistic facsimile. Secondly, as the AI learns and develops, it will become far more adept at planting traps for unwary attackers, meaning that hackers will not only have to go slower than usual to try and avoid said traps, but once one is triggered, it will likely provide far richer data to defense teams about what attackers are clicking on, the information they’re after, and how they’re moving across the site. Finally, using AI tools to design honeypots means that, under the right circumstances, even tangible assets can be turned into honeypots. ... Therefore, having tangible assets double as honeypots allows defense teams to target their energy more efficiently and enables the AI to learn faster, as there will likely be more attackers coming after a real asset than a fake one.


Almost all developers are using AI despite security concerns, survey suggests

Many developers place far too much trust in the security of code suggestions from generative AI, the report noted, despite clear evidence that these systems consistently make insecure suggestions. “The way that code is generated by generative AI coding systems like Copilot and others feels like magic," Maple said. "When code just appears and functionally works, people believe too much in the smoke and mirrors and magic because it appears so good.” Developers can also value machine output over their own talents, he continued. "There’s almost an imposter syndrome," he said. ... Because AI coding systems use reinforcement learning algorithms to improve and tune results when users accept insecure open-source components embedded in suggestions, the AI systems are more likely to label those components as secure even if this is not the case, it continued. This risks the creation of a feedback loop where developers accept insecure open-source suggestions from AI tools and then those suggestions are not scanned, poisoning not only their organization’s application code base but the recommendation systems for the AI systems themselves, it explained.


Former Uber CISO Speaks Out, After 6 Years, on Data Breach, SolarWinds

Sullivan says the key mistake he made was not bringing in third-party investigators and counsel to review how his team handled the breach. "The thing we didn't do was insist that we bring in a third party to validate all of the decisions that were made," he says. "I hate to say it, but it's more CYA." Now, Sullivan advises other CISOs and companies about navigating their responsibilities in disclosing breaches, especially as the new Securities & Exchange Commission (SEC) incident reporting requirements are set to take effect. Sullivan says he welcomes the new regulations. "I think anything that pushes towards more transparency is a good thing," he says. He recalls that when he was on former President Barack Obama's Commission on Enhancing National Cybersecurity, Sullivan was pushing to give companies immunity if they are transparent early on during security incidents. That hasn't happened until now, according to Sullivan, who says the jury is still out on the new regulations, which will require action starting in December.



Quote for the day:

"The distance between insanity and genius is measured only by success." -- Bruce Feirstein