Daily Tech Digest - December 04, 2022

How Your Organization Can Enhance Its Cybersecurity Posture

You need to be prepared for the worst-case scenario. Most organizations are unaware that they have been breached until their data is held to ransom or has been publicly exposed. According to the Information Commissioner's Office, "you must report a notifiable breach to the ICO without undue delay, but not later than 72 hours after becoming aware of it. If you take longer than this, you must give reasons for the delay." After you have made the report, how do you go about securing the rest of your business? And do you know what steps can be taken to lessen the damage done? Controlling users, logs and security is essential. "This is especially true when regarding data protection and information security. Even more so when this data concerns the handling of financial, personal and/or client-sensitive information," SecurityHQ says. ... How often do you do security testing, and what types of security testing do you do? Do you conduct simulated phishing attacks? Do you have vulnerability management in place? Do you know how secure your firewalls are? Do you conduct red team exercises?


Brooklyn Hospitals Decried for Silence on Cyber Incident

Errol Weiss, chief security officer at the Health Information Sharing and Analysis Center, says a lack of transparency by healthcare organizations dealing with ransomware incidents is a common problem. "Despite being a member of an ISAC, we still see organizations reluctant to share attack details when they are a victim of a cyber incident," he says. Senior leaders at those organizations may not trust the anonymity and trust built into information-sharing processes and may be concerned about further exposure and negative reputational impact from unauthorized disclosures, he says. "Given our incredibly litigious society, internal counsel at the impacted organization may also recommend against disclosure outside the company because it could possibly be used against the firm in future litigation," he says. Many organizations do not realize that they have liability protections involving cyber information sharing under the Cybersecurity Information Sharing Act of 2015, he says. "We just need the government and society to create a culture that rewards sharing and does not punish the victim."


8 things to consider amid cybersecurity vendor layoffs

Layoffs of engineers and developers should be the most concerning for CISOs and security teams, Burn adds, describing them as the “canary in the coalmine” when it comes to spotting and fixing security threats. “Often, when we see some of these early layoffs, they impact recruitment or marketing staff, but that shouldn’t concern you really.” However, if you’re looking on LinkedIn and seeing engineers or developers being laid off, that should give you pause for thought, Burn says. Dickson concurs, adding that sales or marketing cuts are unlikely to affect the ability to get security value from the vendor, but cuts to key service or engineering staff could well do just that. For Thacker, the biggest risks to customers would come from a reduction in DevSecOps staffing, “which would potentially bring about a reduction in security oversight, feature updates, and even impact upon the general availability of the service,” while Yuval Wollman, chief cyber officer and managing director of UST, thinks cuts to innovation and research staff could have a direct impact on a product’s efficiency and reliability as the threat landscape evolves and changes.


Is AI moving too fast for ethics? | The AI Beat

The Stable Diffusion news nearly drowned out the applause and chatter of the previous two days, which had centered on Meta's latest AI research announcement: Cicero, an AI agent that masters the difficult and popular strategy game Diplomacy — showing off the machine's ability to master negotiation, persuasion and cooperation with humans. According to a paper published last week in Science, Cicero ranked in the top 10 percent of players in an online Diplomacy league and achieved more than double the average score of the human players — by combining language models with strategic reasoning. Even AI critics like Gary Marcus found plenty to cheer about regarding Cicero's prowess: "Cicero is in many ways a marvel," he said. "It has achieved by far the deepest and most extensive integration of language and action in a dynamic world of any AI system built to date. It has also succeeded in carrying out complex interactions with humans of a form not previously seen."


Talent development: 4 upskilling success stories

Career development is a focus for all employees, even entry-level workers, and everyone is given several opportunities to grow their skills and learn new technologies. For example, an entry-level code developer at Altria will be thrown into highly technical work right away, so they gain experience fast. And then throughout their first five to six years with the company, they will be moved around IT departments to work on different projects, gaining more experience and potentially finding out what they’re most passionate about. “In many cases, we’re trying to put them into a role that ultimately is going to make them sweat — it’s going to really challenge them,” says Dan Cornell, vice president and CIO of Altria Group. Employees also go through an annual talent planning review process to assess where they are in their careers, what they aspire to within the organization, and how they want to shape their career moving forward. Managers can identify areas for growth, what skills can be developed, opportunities for training, and potential experiences in other departments they might benefit from.


The Metaverse Could Become a Top Avenue for Cyberattacks in 2023

Privacy will emerge as a major concern in the metaverse, Kaspersky predicted. "As the metaverse experience is universal and does not obey regional data protection laws, such as GDPR, this might create complex conflicts between the requirements of the regulations regarding data breach notification," Kaspersky said. Others have also expressed concern over the increased amount of personal information that will be collected in fully immersive environments via VR headsets and their collection of cameras, microphones, and motion trackers. Many expect the data will reveal a lot about a user's location, appearance, and other private information while also enabling attackers to carry out more sophisticated phishing and social engineering scams. At least some of the attacks in virtual reality and augmented reality environments will involve virtual abuse and sexual assault — such as that involving cases of avatar rape, Kaspersky said. The security vendor pointed to an incident where an avatar associated with a researcher at a nonprofit advocacy group was raped on a metaverse platform owned by Meta as one example of the kind of issues consumers can increasingly run into.


Why Change Management Skills Are Essential To Data-Driven Success

A simple way of looking at change management is to view it as a set of people-related strategies and tactics that can help shift behaviors and mindsets. It's an essential skill set for everyone who works with data, from the Chief Data Officer (CDO) down to junior analysts. Data leaders will be primarily focused on cultural and procedural resistance, whereas analysts may only deal with decisional resistance. The scope will differ across roles, but everyone plays a valuable part in the transformative process. Change management is a deep, multi-faceted subject, and there is a vast body of work on the topic. ... To build momentum with your data initiatives, it's important to deliver quick wins. Rather than waiting for a long-term payoff, potential skeptics or detractors need to see faster returns. When people get a taste of what's possible through real-world improvements, it becomes easier for them to envision what the future state with data looks like and get on board with the changes.


5 top qualities you need to become a next-gen CISO

Next-gen CISOs are charismatic, innovative, well-connected, and well-respected individuals across the organization and the security industry. They never waste an opportunity to show the value information security brings to the business. They are increasingly creating reporting structures outside of IT to emphasize their independence. Next-gen CISOs regularly participate in industry events and often share their experiences across social media as well as broadcast and print media, helping to further their reputation and influence.

Understands the business, earns trust, and practices empathy

Next-gen CISOs need to understand the business context behind day-to-day challenges faced by employees, without which they cannot make the right security decisions. They should help build employee, customer, partner, and business stakeholder trust through regular engagement and collaboration. CISOs must shed their ivory tower mentality and build bridges with those departments and managers known to be critical of information security.


From capex to opex: Storage procurement options bloom

What we are seeing among storage suppliers is the emergence of consumption models of purchasing for on-site capacity that mirror the ways we buy cloud services. Cloud – in the sense of services delivered remotely – is not always suited to the ways customers work. Some avoid the cloud for reasons of performance, compliance, or risk to security or availability. And so, although true pay-as-you-go storage may have its roots in the cloud, there are now on-site options that bring the same levels of flexibility. These range from opex-based consumption models, in which the hardware remains the supplier's property and customers pay only for the capacity they use, to fully owned capex purchases with hardware upgrades built in as required. At the opex end of things, customers usually commit to base levels of usage, while upgrades to storage and controller hardware are delivered as required. At the capex end of the spectrum, customers can purchase storage hardware outright – but here, some suppliers now offer the option to buy the hardware while still benefiting from built-in hardware upgrades, monitoring and predictive analytics.


Event-driven automation: How to build an event-driven automation architecture

In addition to the events topic, we also have a few other messaging pipelines handled by AMQ (create the task, invoke automation and automation results listener). Each of these will be communicating with the services layer which will handle system events, task management, automation invocation and automation results tracking. These services will also be required to communicate with the intelligent router, which will handle the prioritization based on built-in logic set by your organization. And finally, in this network we include the task and execution stores that hold the data being transacted upon throughout these events. The Manage Task microservice will need to log information into the ticketing system, which isn’t required to be on an isolated network, but is depicted as such to clarify it only needs to communicate with that service, and not the entire architecture. Similarly, the Automation Results service will communicate with both the orchestrator and the results listener, but it’s not required for an isolated network if you want to simplify things in your own implementation.
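The intelligent router described above can be sketched in miniature. This is an illustrative sketch only: the event types, priority values and routing logic here are assumptions for demonstration, not the architecture's actual implementation (which would sit behind AMQ rather than an in-process queue).

```python
import queue

# Hypothetical priority map; in practice this is the "built-in logic
# set by your organization" that the intelligent router applies.
PRIORITY = {"outage": 0, "degraded": 1, "info": 2}  # lower = more urgent

def route(events):
    """Order incoming events by organization-defined priority logic."""
    q = queue.PriorityQueue()
    for seq, ev in enumerate(events):
        # seq breaks ties so dicts are never compared directly
        q.put((PRIORITY.get(ev["type"], 99), seq, ev))
    ordered = []
    while not q.empty():
        ordered.append(q.get()[2])
    return ordered

events = [{"type": "info", "id": 1}, {"type": "outage", "id": 2}]
ordered_ids = [e["id"] for e in route(events)]
print(ordered_ids)
```

In a real deployment, each dequeued event would then be handed to the task-management and automation-invocation services over their own AMQ pipelines.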



Quote for the day:

"Integrity is the soul of leadership! Trust is the engine of leadership!" -- Amine A. Ayad

Daily Tech Digest - December 01, 2022

Data-center requirements should drive network architecture

Fabric architectures for the data center are essential because of the issue of latency. Componentization of applications, the separation of databases from applications, and the increased interactivity of applications overall have combined to make applications sensitive to network delays. That sensitivity is addressed in the data center by fabric or low-latency switching architectures, but it also impacts the rest of the network. Few CIOs have included latency requirements in their SLAs in the past, but more are doing so now. In 2023, CIMI Corporation survey data shows that over half of the new network contracts written will include latency requirements, up 15% from 2022 and double the level of 2021. Mesh/fabric architectures connect everything to everything else with minimal delay, but universal connectivity isn't always a good thing. To control connectivity, data-center networks can employ either explicit connection control—software-defined networks (SDN)—or a virtual network.


UK Companies Fear Reporting Cyber Incidents, Parliament Told

The possibility of regulatory consequences to disclosing incidents drives a wedge between businesses and law enforcement, said Jayan Perera, head of cyber response at London-based Control Risks, while testifying Monday before Parliament's Joint Committee on National Security Strategy. "The fear may not be that law enforcement will come and slap the handcuffs on them," Perera told the committee. Rather, they fear that calling police during a cyber incident "will then lead to, you know, some other broader fallout in terms of the regulatory environment." Reporting channels that allowed businesses to anonymously disclose incidents would result in more data, he suggested. ... Perera wasn't the only one during the hearing to suggest that companies are punished for disclosure. "The comment is also made … that the Americans tend to support their businesses, whereas the other comment also made is that the U.K. tends to find fault when someone gets into trouble," said Lilian Pauline Neville-Jones, a Conservative member of the House of Lords.


Know thy enemy: thinking like a hacker can boost cybersecurity strategy

“There is a misconception security teams have about how hackers target our networks,” says Alex Spivakovsky, who as vice-president of research at security software maker Pentera has studied this topic. “Today, many security teams hyperfocus on vulnerability management and rush to patch [common vulnerabilities and exposures] as quickly as possible because, ultimately, they believe that the hackers are specifically looking to exploit CVEs. In reality, it doesn’t actually reduce their risk significantly, because it doesn’t align with how hackers actually behave.” Spivakovsky, an experienced penetration tester who served with the Israel Defense Forces units responsible for protecting critical state infrastructure, says hackers operate like a business, seeking to minimize resources and maximize returns. In other words, they generally want to put in as little effort as possible to achieve maximum benefit. He says hackers typically follow a certain path of action: once they breach an IT environment and have an active connection, they collect such data as usernames, IP addresses, and email addresses.


Cybersecurity incidents cost organizations $1,197 per employee, per year

Perception Point’s report notes that one of the key challenges for defenders is that threat actors have expanded their attack toolkits beyond email and the web browser, with attacks on cloud-based apps and services, such as collaboration apps and storage, occurring at 60% of the frequency with which they occur on email-based services. Given that Gartner estimates that nearly 80% of workers are using collaboration tools for work, enterprises not only need cost-efficient prevention of cyberattacks across on-premises and cloud environments, but they also need a robust incident response process to resolve security incidents in the shortest time possible. “In terms of the potential risk and damages — prevention of attacks has a greater financial impact on the organization,” said Michael Calev, Perception Point’s VP of corporate development and strategy. “One successful breach for an organization can cause damage amounting to millions of dollars — for bigger companies this could mean a significant loss in revenue, production capabilities, and a hit to their reputation, while for smaller companies it could spell disaster and even the end of their ability to operate,” Calev said.


Who Is Watching Your Data?

As data volumes grow, it will become increasingly important to master data observability. A recent study of senior professionals from IDC that was sponsored by my company found that a majority of organizations with the highest data intelligence maturity are on the path toward data quality and data observability. The future is really about what we will observe, and I believe it will move beyond data quality to the volume, frequency and behavior of data. We will start observing the infrastructure side, including how much storage is necessary, how much compute is necessary and how much it is costing. For instance, you might do an integration every night, but suddenly someone has made a small change, and it becomes 100 times more expensive. No one wants that surprise. I expect the scope of what we are observing to expand dramatically into other areas, too, particularly into security and privacy checks to ensure sensitive data is used only in the way it should be. In this cloud world, there are so many possibilities.


AWS CEO urges enterprises to do more in the cloud in the face of economic uncertainty

“If you’re looking to tighten your belt, the cloud is the place to do it,” said Selipsky – because of the flexibility it offers enterprises when it comes to scaling up or down their operations in the face of fluctuating demand. He went on to share the story of app-based holiday rental company Airbnb which, because of its earlier foray into the public cloud, was better equipped to weather the downturn in demand for its services during the Covid-19 pandemic. “Airbnb was already a significant cloud user,” said Selipsky. “And with all their expertise in the cloud, and the efficiencies that they’ve already captured, they were far more prepared than many others when the bottom fell out of the hospitality industry in 2020. “Airbnb was able to take down their cloud spending by 27% – quickly. And then, when the world began to emerge from the worst of the pandemic, Airbnb was able to quickly turn on the cloud infrastructure that they needed, and continue to drive innovation.”


Could Software Issues Delay Widespread Electric Vehicle Adoption?

Key obstacles EV software developers face include software development complexity and the rapid pace of technology evolution, says Mathew Desmond, automotive industry solutions architect at business advisory firm Capgemini Americas. Other challenges include the pressure to continually provide new features to meet customer expectations and the need for enhanced vehicle safety requirements despite an accelerated development pace. Alex Oyler, a director with SBD Automotive, a global research and consulting firm, believes that EV software developers face two primary challenges: dual-track development and immature tools. “Many software developers are trying to develop software for both combustion engine and EV platforms at the same time, essentially doubling the complexity of their software stack,” he explains. Meanwhile, the sophisticated high-performance computers powering many modern EVs require multiple advanced development tools and skillsets. “Most of these tools are immature, with many companies developing tools and skills as they develop their cars,” Oyler says.


API Security: From Defense-in-Depth (DiD) To Zero Trust

Being able to observe security risks is critical in combating targeted attacks. After a hacker has breached the outermost layer of defenses, we need observability mechanisms to identify which traffic is likely the malicious attack traffic. Common means of implementing security observability are honeypots, IDS (Intrusion Detection System), NTA (Network Traffic Analysis), NDR (Network Detection and Response), APT (Advanced Persistent Threat) detection, and threat intelligence. Among them, honeypots are one of the oldest methods. By imitating some high-value targets to set traps for malicious attackers, they can analyze attack behaviors and even help locate attackers. On the other hand, APT detection and some machine learning methods are not intuitive to evaluate. Fortunately, for most enterprise users, simple log collection and analysis, behavior tracking, and digital evidence are enough for the timely detection of abnormal behaviors. Machine learning is an advanced but imperfect technology with problems such as false positives and missed detections.


Why security should be on every IT department's end-of-year agenda

For many IT teams, hiring is fraught with inconsistency. This makes the end-of-year agenda extremely important for IT teams and their hiring counterparts. Deciding which employees will be promoted, what new positions can be created, and backfilling employees who have moved on to new roles is a puzzle for both IT department leads and hiring managers. For many organizations, the end of the year means focusing on organizing this turnover ahead of the new year. From reclaiming devices of past employees to redistributing unused licenses to save funds, there are multiple staffing-related tasks to complete before year-end. With this in mind, IT teams must discuss their hiring needs for the new year and what roles they ideally would like to fill by the end of the current year. Many people leave their jobs toward the end of the year, so there will soon be more open positions than usual for cybersecurity employees. Make sure your team is clear and organized on your hiring strategy: If you’re hiring, align on priorities and the most urgent vacancies.


Ending the DevOps vs. Software Engineer Cold War

What’s at the heart of this war? To understand that, let’s unpack two major issues that emerge from this not-so-smooth but all-too-familiar scenario. First, without a common language and clear communication channels, no two parties can work together even on simple tasks, let alone complex ones. Second, even with a common language, all the excess work, context switching, delays, and the inevitable friction, lead to cold-war-level frustration brewing within your organization. Adding to these issues are the blurred lines of responsibility that the DevOps model has created for both software engineering and DevOps (aka operations) teams. But the reality is that: Software engineers want to code, implement features and run them on infrastructure (so the customers can use them), without a lot of hassle and without getting bogged down in the operational details; DevOps want to focus on streamlining and keeping production stable, optimizing infrastructure, improving monitoring and general innovation, without getting sucked into the rabbit hole of end-user (e.g., software engineers’) service and access requests.



Quote for the day:

"The final test of a leader is that he leaves behind him in other men, the conviction and the will to carry on." -- Walter Lippmann

Daily Tech Digest - November 30, 2022

7 lies IT leaders should never tell

Things break, and in most cases, it comes as a surprise. IT consists of many systems requiring different degrees of connectivity and monitoring, making it difficult to know absolutely everything at every moment. The key to minimizing failures is to be proactive rather than simply waiting for bad things to happen. CIOs should not only expect things to break but also be honest about this with their team members and business colleagues. “Eat, sleep, and live that life,” advises Andre Preoteasa, internal IT director at IT business management firm Electric. “There are things you know, things you don’t know, and things you don’t know you don’t know,” he observes. “Write down the first two, then think endlessly about the last one — it will make you more prepared for the unknowns when they happen.” Preoteasa stresses the importance of building and maintaining detailed disaster recovery and business continuity plans. “IT leaders that don’t have [such plans] put the company in a bad position,” he notes. “The exercise alone of writing things down shows you’re thinking about the future.”


Amid Legal Fallout, Cyber Insurers Redefine State-Sponsored Attacks as Act of War

Acts of war are a common insurance exclusion. Traditionally, exclusions required a "hot war," such as what we see in Ukraine today. However, courts are starting to recognize cyberattacks as potential acts of war without a declaration of war or the use of land troops or aircraft. The state-sponsored attack itself constitutes a war footing, the carriers maintain. ... Effectively, Forrester's Valente notes, larger enterprises might have to set aside large stores of cash in case they are hit with a state-sponsored attack. Should insurance carriers be successful in asserting in court that a state-sponsored attack is, by definition, an act of war, no company will have coverage unless they negotiate that into the contract specifically to eliminate the exclusion. When buying cyber insurance, "it is worth having a detailed conversation with the broker to compare so-called 'war exclusions' and determining whether there are carriers offering more favorable terms," says Scott Godes, partner and co-chair of the Insurance Recovery and Counseling Practice and the Data Security & Privacy practice at District of Columbia law firm Barnes & Thornburg.


Top 5 challenges of implementing industrial IoT

Scalability is another challenge faced by professionals trying to make progress with their IIoT implementations. Bain’s 2022 study of IIoT decision-makers indicated that 80% of those who purchase IIoT technology scale fewer than 60% of their planned projects. The top three reasons why those respondents failed to scale their projects were that the integration effort was overly complicated and required too much effort, the associated vendors could not support scaling, and the life cycle support for the project was too expensive or not credible. One of the study’s takeaways was that hardware could help close gaps that prevent company decision-makers from scaling. Another best practice is for people to take a long-term viewpoint with any IIoT project. Some people may only think about what it will take to implement an initial proof of concept. That’s just a starting point. They’ll have to look beyond the early efforts if they want to eventually scale the project, but many of the things learned during the starting phase of a project can be beneficial to know during later stages.


AWS And Blockchain

The customer CIO, an extremely smart person, spoke up, in beautifully-rounded European vowels: “Here’s a use case I’ve been told about that’s on my mind.” He named a region in Asia and explained that the small farmers there mark their landholdings carefully, but then the annual floods sometimes wash the markers away. Then unscrupulous larger landowners use the absence of markers to cut away at the smallholdings of the poorest. “But if the boundary markers were on the blockchain,” he said, “they wouldn’t be able to do that, would they?” ... I thought. Then said “As a lifelong technologist, I’ve always been dubious about technology as a solution to a political problem. It seems a good idea to have a land-registry database but, blockchain or no, I wonder if the large landowners might be able to find another way to fiddle the records and still steal the land? Perhaps this is more about power than boundary markers?” Later in the ensuing discussion I cautiously offered something like the following, locking eyes on the CIO: “There are many among Amazon’s senior engineers who think blockchain is a solution looking for a problem.” He went entirely expressionless and the discussion moved on.

The key message is that before persisting the data into the storage layers (Bronze, Silver, Gold), the data must pass data quality checks, and corrupted records that fail those checks must be dealt with separately before anything is written into the storage layer. ... The “Bronze => Silver => Gold” pattern is a type of data flow design, also called a medallion architecture. A medallion architecture is designed to incrementally and progressively improve the structure and quality of data as it flows through each layer of the architecture. This is why it is relevant for today’s article regarding data quality and reliability. ... Generally, data quality requirements become more and more stringent as the data flows from raw to bronze to silver and to gold, as the gold layer directly serves the business. You should, by now, have a high-level understanding of what a medallion data design pattern is and why it is relevant for a data quality discussion.
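The quality gate described here can be sketched in a few lines. This is a minimal sketch under stated assumptions: the field names, rules and `write_to_layer` helper are hypothetical stand-ins; production pipelines typically express such checks with Spark, Delta Lake expectations or a dedicated data quality framework.

```python
def quality_check(record):
    """Return True if a raw record passes basic quality rules (illustrative)."""
    return (
        record.get("id") is not None
        and isinstance(record.get("amount"), (int, float))
        and record["amount"] >= 0
    )

def write_to_layer(records, layer):
    # Placeholder for persisting to Bronze/Silver/Gold storage.
    print(f"writing {len(records)} records to {layer}")

def promote(raw_records, layer="bronze"):
    """Gate records on quality checks; quarantine failures separately."""
    good = [r for r in raw_records if quality_check(r)]
    bad = [r for r in raw_records if not quality_check(r)]
    write_to_layer(good, layer)
    write_to_layer(bad, "quarantine")
    return good, bad

raw = [{"id": 1, "amount": 9.5}, {"id": None, "amount": 3.0}]
good, bad = promote(raw)
```

The same gate would be reapplied with stricter rules at each promotion (bronze to silver, silver to gold), matching the pattern of progressively tightening quality requirements.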


The Digital Skills Gap is Jeopardising Growth

With people staying in workforces longer than ever before and careers spanning five decades becoming the norm, upskilling at a massive scale is needed. However, this need is not fully addressed; a worrying 6 in 10 (58%) people we surveyed in the UK told us that they have already been negatively affected by a lack of digital skills. Organisations can’t just rely on recruiting from a limited pool of digital specialists. More focus is also needed by organisations to upskill their own employees, in both tech and human digital skills. At a recent digital skills panel debate in Manchester, the director of a recruitment agency stated bluntly that: “Many businesses are currently overpaying to bring in external digital skills because of increased competition and this just isn’t sustainable. Upskilling your current teams should be as important as recruiting in new talent to keep costs in check and create a more balanced and loyal workforce.” It’s crucial to upskill employees, not only to get the necessary digital capabilities in our organisations, but to build loyalty and retain valued team members.


Emerging sustainable technologies – expert predictions

AI and automation technologies offer a smart solution, too; they could channel energy when it is plentiful into less time-sensitive uses, such as charging up electric vehicles or heating storage heaters. For example, Drax has looked at ways of combining AI with smart meters to channel our energy use, so that we take advantage of those periods when energy creation exceeds demand. The debate over whether we need new technologies or just need to scale-up existing sustainable technologies has even reached the higher echelons of power. John Kerry, US special presidential envoy for climate, and a certain Bill Gates say we need technologies which haven’t been invented yet. World-renowned climate change scientist Michael Mann disagrees. In his expert opinion, we just need to scale up existing technologies. ... But there is one other application — an application which will create extraordinary opportunity and open the way for many technologies we have been considering up to now. When all of our power is provided by renewables, the total annual supply is likely to exceed total annual demand by a large margin.


Women in IT: Progress in Workforce Culture, But Problems Persist

From Milică's perspective, the greatest challenge facing women in IT today is a lack of role models. “Women need to be the role models who can inspire young minds, especially more women and minority leaders,” she says. “Even at the individual level, each of us -- teachers, parents, and other influential adults -- can plant the seed and grow the understanding among young people of the importance of IT jobs, and how that career path can make a difference in our world and society.” She adds hiring bias and pay inequality, along with the lack of female role models, leaders, and advancement opportunities, all discourage women from pursuing a STEAM career. “Women have to work much harder both to get hired and to advance their careers -- which perhaps explains why 52% of women in cybersecurity hold postgraduate degrees, compared to only 44% of men,” Milică notes. She adds the industry also hasn’t done a great job sparking interest at an early age. “Attention to a career path starts with children as early as elementary school, and by middle or high school, many students will have made their decisions,” she explains.


EPSS explained: How does it compare to CVSS?

EPSS aims to help security practitioners and their organizations improve vulnerability prioritization efforts. The number of vulnerabilities in today’s digital landscape is growing rapidly, driven by factors such as the increased digitization of systems and society, increased scrutiny of digital products, and improved research and reporting capabilities. Organizations generally can only fix between 5% and 20% of vulnerabilities each month, EPSS claims, while fewer than 10% of published vulnerabilities are ever known to be exploited in the wild. Longstanding workforce issues are also at play: the annual ISC2 Cybersecurity Workforce Study shows a global shortage of more than two million cybersecurity professionals. These factors make it essential for organizations to take a coherent and effective approach to prioritizing the vulnerabilities that pose the highest risk, so that limited resources and time are not wasted. The EPSS model aims to provide some support by producing, for each vulnerability, the probability that it will be exploited in the next 30 days; scores range from 0 to 1 (0% to 100%).
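These probability scores lend themselves to simple prioritization logic. The sketch below is illustrative only: the CVE identifiers and scores are invented for the example, and in practice the scores would come from the FIRST.org EPSS data feed.

```python
# Sketch: ranking vulnerabilities by EPSS score.
# EPSS scores are probabilities (0.0-1.0) that a vulnerability
# will be exploited in the wild within the next 30 days.

def prioritize(vulns, threshold=0.1):
    """Return (CVE, score) pairs at or above the threshold, highest risk first."""
    hits = [(cve, score) for cve, score in vulns.items() if score >= threshold]
    return sorted(hits, key=lambda item: item[1], reverse=True)

# Hypothetical scores for illustration, not real EPSS output.
observed = {
    "CVE-2023-0001": 0.02,   # low probability: deprioritize
    "CVE-2023-0002": 0.87,   # high probability: patch first
    "CVE-2023-0003": 0.34,
}
for cve, score in prioritize(observed):
    print(f"{cve}: {score:.0%}")
```

A team that can only remediate a fraction of its backlog each month could use a ranking like this to spend that capacity on the few CVEs most likely to actually be exploited.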


Could it be quitting time?

The book tackles a challenge that proves stubbornly difficult for most people. Letting go of anything is hard, especially at a time when pundits tout the power of grit, building resilience, and toughing it out. Duke provides permission to see quitting as not only viable but often preferable, and she explains why people rarely give up at the right time. “Quitting is hard, too hard to do entirely on our own,” she writes. “We as individuals are riddled by the host of biases, like the sunk cost fallacy, endowment effect, status quo bias, and loss aversion, which lead to escalation of commitment. Our identities are entwined in the things that we’re doing. Our instinct is to want to protect that identity, making us stick to things even more.” These biases—some of them unconscious—prompt us to stick with jobs that have lost their appeal or value; hold on to losing stocks long after an inner voice screams “Sell!”; or endure myriad other situations that no longer serve us. Duke focuses far more on the thinking behind the decision to “quit or grit” rather than on the decision’s final outcomes.



Quote for the day:

"Teamwork is the secret that makes common people achieve uncommon results." -- Ifeanyi Enoch Onuoha

Daily Tech Digest - November 29, 2022

Cloud-Native Goes Mainstream as CFOs Seek to Monitor Costs

There's interest from the CFO organization in third-party tools for cloud cost management and optimization that can give them a vendor-neutral tool, especially in multicloud environments, according to Forrester analyst Lee Sustar. "The cost management tools from cloud providers are generally fine for tactical decisions on spending but do not always provide the higher level views that the CFO office is looking for," he added. As organizations move to a cloud-native strategy, Sustar said the initiative will often come from the IT enterprise architects and the CTO organization, with backing from the office of the CIO. "Partners of various sorts are often needed in the shift to cloud-native, as they help generalize the lessons from the early adopters," he noted. "Today, organizations new to the cloud are focused not on lifting and shifting existing workloads alone, but modernizing on cloud-native tech. Multicloud container platform vendors offer a more integrated approach that can be tailored to different cloud providers," Sustar added.


Financial services increasingly targeted for API-based cyberattacks

APIs are a core part of how financial services firms are changing their operations in the modern era, Akamai said, given the growing desire for more and more app-based services among the consumer base. The pandemic merely accelerated a growing trend toward remote banking services, which led to a corresponding growth in the use of APIs. However, every new application, and every standardization of how app functions talk to one another (which is what creates APIs), increases the potential attack surface. Only high-tech firms and e-commerce companies were more heavily targeted via API exploits than the financial services industry. “Once attackers launch web application attacks successfully, they could steal confidential data, and in more severe cases, gain initial access to a network and obtain more credentials that could allow them to move laterally,” the report said. “Aside from the implications of a breach, stolen information could be peddled in the underground or used for other attacks. This is highly concerning given the troves of data, such as personally identifiable information and account details, held by the financial services vertical.”


The future of cloud computing in 2023

Gartner research estimates that we exceeded one billion knowledge workers globally in 2019. These workers are defined as those who need to think creatively and deliver conclusions for strategic impact. These are the very people that cloud technology was designed to facilitate. Cloud integrations in many cases can be hugely advanced and mature from an operational standpoint. Businesses have integrated multi-cloud solutions, containerization and continuously learning AI/ML algorithms to deliver truly cutting-edge results, but those results are often not delivered at the scale or speed necessary to make the split-second decisions needed to thrive in today’s operating environment. For cloud democratization to be successful, companies need to upskill their knowledge workers and equip them with the tools needed to deliver value from cloud analytics. Low-code and no-code tools reduce the experiential hurdle needed to deliver value from in-cloud data, whilst simultaneously delivering on the original vision of cloud technology — giving people the power they need to have their voices heard.


What Makes BI and Data Warehouses Inseparable?

Every effective BI system has a potent DWH at its core. That is because a data warehouse is a platform used to centrally gather, store, and prepare data from many sources for later use in business intelligence and analytics. Think of it as a single repository for all the data needed for BI analyses. In a data analytics DWH, historical and current data are kept in a structured form that is ideal for sophisticated querying. Once connected to business intelligence tools, the DWH produces reports with forecasts, trends, and other visualizations that support practical insights. ETL (extract, transform, and load) tools, a DWH database, DWH access tools, and reporting layers are all parts of the business analytics data warehouse. These technologies are available to speed up the data science procedure and reduce, or completely eliminate, the need to write code for data pipelines. The ETL tools assist in data extraction from source systems, format conversion, and data loading into the DWH. Structured data for reporting is stored and managed by the database component.
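The extract-transform-load pattern those ETL tools implement can be sketched in a few lines. The table name, records, and in-memory database below are hypothetical stand-ins; production ETL tools layer scheduling, error handling, and incremental loads on top of this basic flow.

```python
# Minimal ETL sketch: pull raw rows from a source, normalize them,
# and load them into a warehouse fact table.
import sqlite3

def extract():
    # Extract: raw rows from a source system (hard-coded for illustration).
    return [("2022-11-28", "1,200.50"), ("2022-11-29", "980.00")]

def transform(rows):
    # Transform: strip thousands separators and cast amounts to float.
    return [(day, float(amount.replace(",", ""))) for day, amount in rows]

def load(rows, conn):
    # Load: write the cleaned rows into the warehouse table.
    conn.execute("CREATE TABLE IF NOT EXISTS daily_sales (day TEXT, total REAL)")
    conn.executemany("INSERT INTO daily_sales VALUES (?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
print(conn.execute("SELECT SUM(total) FROM daily_sales").fetchone()[0])  # 2180.5
```

Once the data sits in the warehouse table in a consistent format, BI tools can query it directly for reports and trend visualizations.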


Covering Data Breaches in an Ethical Way

Ransomware and extortion groups usually publicly release stolen data if a victim doesn't pay. In many cases, the victim organization hasn't publicly acknowledged it has been attacked. Should we write or tweet about that? ... These are victims of crime, and not every organization handles these situations well, but the media can make it worse. Are there exceptions to this rule? Sure. If an organization hasn't acknowledged an incident but numerous media outlets have published pieces, then the incident could be considered public enough. But many people tweet or write stories about victims as soon as their data appears on a leak site. I think that is unfair and plays into the attackers' hands, increasing pressure on victims. ... Covering cybercrime sensitively: using leaked personal details to contact people affected by a data breach is a touchy area. I only do this in very limited circumstances. I did it with one person in the Optus breach. The reason was that, at that point, there were doubts about whether the data had originated with Optus. The person also lived down the road from me, so I could talk to them in person.


EU Council adopts the NIS2 directive

NIS2 will set the baseline for cybersecurity risk management measures and reporting obligations across all sectors that are covered by the directive, such as energy, transport, health and digital infrastructure. The revised directive aims to harmonise cybersecurity requirements and implementation of cybersecurity measures in different member states. To achieve this, it sets out minimum rules for a regulatory framework and lays down mechanisms for effective cooperation among relevant authorities in each member state. It updates the list of sectors and activities subject to cybersecurity obligations and provides for remedies and sanctions to ensure enforcement. The directive will formally establish the European Cyber Crises Liaison Organisation Network, EU-CyCLONe, which will support the coordinated management of large-scale cybersecurity incidents and crises. While under the old NIS directive member states were responsible for determining which entities would meet the criteria to qualify as operators of essential services, the new NIS2 directive introduces a size-cap rule as a general rule for identification of regulated entities.


Cybersecurity: How to do More for Less

When assessing your existing security stack, several important questions need to be asked: Are you getting the most out of your tools? How are you measuring their efficiency and effectiveness? Are any tools dormant? And how much automation is being achieved? The same should be asked of your IT stack: is there any bloat or technical debt? Across your IT and security infrastructure, there are often unnecessary layers of complexity in processes, policies and tools that can lead to waste. For example, having too many tools leads to high maintenance and configuration overheads, draining both resources and money. Similarly, technologies that combine on-premises infrastructure and third-party cloud providers require complex management and processes. IT and cybersecurity teams, therefore, need to work together with a clear shared vision to find ways to drive efficiency without reducing security. This requires clarity over roles and responsibilities between security and IT teams for asset management and deployment of security tools. It sounds straightforward but often is not, due to historic approaches to tool rollout.


Being Agile - A Success Story

To better understand the Agile methodology and its concepts, it is crucial to understand the Waterfall methodology. Waterfall is another well-known Software Development Life Cycle (SDLC) methodology. It is a strict, linear approach to software development that aims to deliver the entire project as a single, significant outcome. Agile, on the other hand, is an iterative method that delivers results in short intervals and relies on a feedback loop to drive the next iteration of work. The diagram below describes other significant differences between these methodologies. In Waterfall, we define and fix the scope and estimate the resources and time to complete the task. In Agile, the time and resources are fixed (a time-box called an "iteration"), and the work is estimated for every iteration. Agile helps estimate and evaluate the work that brings value to the product and the stakeholders. It is always a topic of debate as to which methodology to use for a project. Some projects are better managed with Waterfall, while others are an excellent fit for Agile.


User Interface Rules That Should Never Be Overlooked

The most important user interface design rule that should never be overlooked is the rule of clarity. Clarity is critical when it comes to user interfaces, says Zeeshan Arif, founder and CEO of Whizpool, a software and website development company. “When you're designing an interface, you need to make sure your users understand what they can do at all times,” Arif advises. This means making sure that buttons are correctly labeled and that there aren't any unexpected changes or surprises that might confuse users. “If a button says ‘delete’, then it should delete whatever it's supposed to delete -- and only that thing,” he says. “If you have a button that does something else, then either make it a different color or label it differently, but don't put in something that looks like a delete button but doesn't actually delete anything.” Don't perplex users by designing a user interface crammed with superfluous options and/or features. “If you have too many buttons on one page, and none of them are labeled well enough for someone who isn't familiar with them, [users will] probably just give up before they even get started using your product, service, app, or website,” Arif says.


6 non-negotiable skills for CIOs in 2023

CIOs need to think about both internal integrations and external opportunities. They need to have strong relationships and be able to pull the business leaders together. For example, I’m working with an entrepreneurial organization that runs different lines of business that are very strong, with heads of those businesses who are also very strong. One of their challenges, however, is that their clients can be customers of multiple businesses. Between the seams, the client experiences the organizational structure of the business, which is a problem – a client should never experience your organizational structure. The person best equipped to identify and close those seams and integration points is the CIO. ... In the past, most organizations operated with a business group that sat between technology and the clients. The movement around agile, however, has knocked those walls down and today allows IT to become client-obsessed – we’re cross-functional teams that are empowered and organized around business and client outcomes. As a CIO, you need to spend time with clients and have a strong internal mission, too. You have to develop great leaders and motivate and engage an entire organization.



Quote for the day:

"A leader has the vision and conviction that a dream can be achieved. He inspires the power and energy to get it done." -- Ralph Nader

Daily Tech Digest - November 28, 2022

5 ways to avoid IT purchasing regrets

When it comes to technology purchases, another regret can be not moving fast enough. Merim Becirovic, managing director of global IT and enterprise architecture at Accenture, says his clients often wonder whether they’re falling behind. “With the level of technology maturity today, it’s a lot easier to make good decisions and not regret them. But what I do hear are questions around how to get things done faster,” he says. “We’re getting more capabilities all the time, but it’s all moving so quickly that it’s getting harder to keep up.” A lag can mean missed opportunities, Becirovic says, which can produce a should-have-done-better reproach. “It’s ‘I wish I had known, I wish I had done,’” he adds. Becirovic advises CIOs on how to avoid such scenarios, saying they should make technology decisions based on what will add value; shift to the public cloud to create the agility needed to keep pace with and benefit from the quickening pace of IT innovation; and update IT governance practices tailored to overseeing a cloud environment with its consumption-based fees.


5 digital transformation metrics to measure success in 2023

If money (whether earned or saved) is the first pillar of most business metrics, then time is another. That could be time spent or saved (more on that in a moment), but it’s also in the sense of pure speed. "Time to market should be one of the most critical digital transformation metrics right now for enterprises across industries,” says Skye Chalmers, CEO of Image Relay. “The market impact of a digital transformation project is all about its speed: If you don’t cross the finish line first with compelling new customer [or] employee experiences or other digital modernization initiatives, your competitors will.” So while an overall digital transformation strategy may not have an endpoint, per se, the goals or milestones that comprise that strategy should have some time-based measurement. And from Chalmers’ point of view, the speed with which you can deliver should be a key factor in decision-making and measurement. Focusing on the time-to-market metric “will directly improve an enterprise’s competitive position and standing with customers,” Chalmers says.


More Organizations Are Being Rejected for Cyber Insurance — What Can Leaders Do?

Before soliciting cyber insurance quotes, examine several areas of your network security to understand what vulnerabilities exist. Insurers will do just that, so anticipating gaps in your infrastructure, software, and systems will provide you with a clearer idea of what your company needs. Start with your enterprise network. Who has access and to what degree? Every person who has access to your network provides an attack vector, increasing the possibility of an attacker accessing more data through lateral movement. If an outside agent can gain entry to your network, that person or bot can harvest the most privileged credentials and move between servers and throughout the storage infrastructure while continually exploiting valuable sensitive data. That’s why most insurance audits consider privilege sprawl to be among the top risks. It happens when special rights to a system have been granted to too many people. It impacts the cost of premiums and could even lead to a loss of coverage. Public cloud assets also present an opportunity for a strike. Is access to that information secure? 


Retirees Must Have These Four Key Components To Make A Winning Side Hustle

Since when does everything always go as planned? Spoiler Alert: It never does. There’s even a saying for this: “Into each life, a little rain must fall.” And when those rain clouds do appear, what do successful entrepreneurs do? They don’t pack up their gear and head for shelter. No, they plant their feet firmly into the (muddy) ground and start selling umbrellas. “When you study success and read extensively about entrepreneurs, you realize that successful people come from a variety of backgrounds and circumstances, but they have one thing in common—they consistently do the work,” says Case Lane, Founder of Ready Entrepreneur in Los Angeles. “The only talent needed is knowing you can make that commitment to keep working to ensure business success.” Entrepreneurs don’t fear change (see above); they see it as an opportunity. “I knew how to solve a problem that many people were experiencing, and I knew I could help those people,” says Chane Steiner, CEO of Crediful in Scottsdale, Arizona. 


Top 6 security risks associated with industrial IoT

Device hijacking is one of the most common security challenges of IIoT, occurring when an IoT sensor or endpoint is taken over by an attacker. It can lead to serious data breaches, depending on the intelligence of the sensors as well as the number of devices the sensors are connected to. Sensor breaches or hijacks can easily expose your sensors to malware, enabling the hacker to take control of the endpoint device. With this level of control, hackers can run the manufacturing processes as they wish. ... IIoT deals with many physical endpoint devices that can be stolen if not protected from prying eyes. This situation can pose a security risk to any organization if these devices are used to store sensitive information. Organizations that make heavy use of endpoint devices can take steps to ensure those devices are protected, but storing critical data on them can still raise safety concerns due to the growing number of endpoint attacks. To minimize the risk associated with device theft, organizations should avoid storing sensitive information on endpoint devices and instead use cloud-based infrastructure to store critical information.


Cloud security starts with zero trust

Generally speaking, the best way for an organization to approach zero trust is for security teams to take the mindset that the network is already compromised and develop security protocols from there. With this in mind, when implementing zero trust into a cloud environment, organizations must first perform a threat assessment to see where their biggest vulnerabilities lie. Zero trust strategy requires an inventory of every single item in a company’s portfolio, including a list of who and what should and should not be trusted. Additionally, organizations must develop a strong understanding of their current workflows and create a well-maintained inventory of all the company’s assets. After conducting a thorough threat assessment and developing an inventory of key company information, security controls must be specifically designed to address any threats identified during the threat assessment to tailor the zero trust strategy around them. The nature of zero trust is inherently complex due to the significant steps that a company has to take to achieve a true zero trust atmosphere, and this is something that more businesses should take into account.


How to Not Screw Up Your Product Strategy

Creating the strategy also requires influencing and collaborating with many people. All of these interactions require time to get people on the same page, discuss disagreements, and incorporate improvements or changes. Your market can also change quickly: new competitors can emerge, technologies change, and customer feedback can shift. These all can result in changes in perspective or emphasis, which can further slow down putting together a product strategy. And finally, even after you’ve done all the hard work putting the strategy together, you have a lot of work to do communicating that strategy and getting people to understand it. This also takes a lot of time. The end result of all these steps is that a common failure mode is “the product strategy is coming.” My recommendation is to always have a working product strategy. Because strategy work takes time, you shouldn’t make people wait for it. If you don’t have a real strategy, start with a temporary, short-term strategy, based on your best thinking at the moment.


Why Microsegmentation is Critical for Securing CI/CD

While cloud-native application development has many benefits, traditional network architectures and security practices cannot keep up with DevOps practices like CI/CD. Microsegmentation reduces network risk and prevents lateral movement by isolating environments and applications. However, it can be a challenge to implement segmentation in a cloud-native environment. Typical network security teams use a centralized approach with one SecOps team responsible for all security management. For example, some networks have ticket-based approval systems where the central team reviews each request based on access policies. However, this system is slow and prone to human error. Teams can use DevOps methods to operationalize microsegmentation, implementing policy as code. You can also leverage a microsegmentation solution that helps automate and secure the process. The security team enforces basic segmentation policies, while application owners create more granular policies. This decentralized security approach preserves the agility of DevOps.


Data Strategy: Synthetic Data and Other Tech for AI's Next Phase

Synthetic data is one of several AI technologies identified by Forrester as less well known but having the power to unlock significant new capabilities. Others on the list are transformer networks, reinforcement learning, federated learning and causal inference. Curran explains that transformer networks use deep learning to accurately summarize large corpuses of text. “They allow for folks like myself to basically create a pretty concise slide based off of a piece of research I’ve written,” he says. “I already use AI-generated images in probably 90% of my presentations at this point in time.” The same base technology of transformer networks and large language models can be used to generate code for enterprise applications, Curran says. Reinforcement learning allows tests of many actions in simulated environments, enabling a large number of micro-experiments that can then be used for constructing models to optimize objectives or constraints, according to Forrester. ... Such a simulation would let you account for your big order, the cost of shutting down at peak season, and other factors in your decision of whether to take that piece of equipment down for maintenance.
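The basic idea behind synthetic data can be illustrated with a toy sketch: fit simple statistics to real records, then sample new records from them. The ages below are invented for the example, and real synthetic-data tools model correlations between fields (for instance with GANs or copulas) rather than a single column in isolation.

```python
# Sketch: generating synthetic values that mimic the distribution of
# real data without copying any individual record.
import random
import statistics

real_ages = [34, 45, 29, 52, 41, 38, 47, 33]  # invented "real" records
mu = statistics.mean(real_ages)
sigma = statistics.stdev(real_ages)

random.seed(0)  # fixed seed so the sketch is reproducible
synthetic_ages = [round(random.gauss(mu, sigma)) for _ in range(8)]
print(synthetic_ages)
```

The synthetic sample preserves the rough shape of the original distribution, which is what makes this kind of data useful for testing and model training when the real records are too sensitive to share.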


Smart office trends to watch

A growing number of office buildings now have an effective Building Management System (BMS). Ideally this will be combined with energy generation and storage and water management systems, which can deliver huge cost, resource and emissions savings, but a good BMS is a good start. It can optimise energy use through smart lighting and temperature systems, controlled by software which draws information from Internet of Things (IoT) or Radio Frequency Identification (RFID) sensors throughout the building. Energy and cost savings are also improved by smart LED lighting, controlled by sensors that ensure it is only used as and when needed. Providers of BMS and related solutions include Smarter Technologies, which uses RFID sensors to monitor energy and water use, temperature, humidity, air quality, room or desk occupancy and even whether bins need emptying. SP Digital’s GET Control system offers IoT and AI-based temperature control, dividing open plan offices into microzones, through which air flow is regulated based on occupancy and both conditions inside and ambient weather conditions outside the building. 



Quote for the day:

"In simplest terms, a leader is one who knows where he wants to go, and gets up, and goes." -- John Erskine