Daily Tech Digest - December 05, 2022

Is SASE right for your organization? 5 key questions to ask

Many analysts say that SASE is particularly beneficial for mid-market companies because it replaces multiple, and often on-premises, tools with a unified cloud service. Many large enterprises, on the other hand, will not only have legacy constraints to consider, but they may also prefer to take a layered security approach with best-of-breed security tools. Another factor to consider is that the SASE offering might be presented as a consolidated solution, but if you dig a little deeper it might actually be a collection of different tools from various partnering vendors, or features obtained through acquisition that have not been fully integrated. Depending on the service provider, SASE offers a unified suite of security services, including but not limited to encryption, multifactor authentication, threat protection, Data Leak Prevention (DLP), DNS, and traditional firewall services. ... With incumbents such as Cisco, VMware, and HPE all rolling out SASE services, enterprises with existing vendor relationships may be able to adopt SASE without needing to worry much about protecting previous investments.


How gamifying cyber training can improve your defences

Gamification is an attempt to enhance systems and activities by creating similar experiences to those in games, in order to motivate and engage users, while building their confidence. This is typically done through the application of game-design elements and game principles (dynamics and mechanics) in non-game contexts. Research into gamification has found that it has positive effects. ... Gamification has been dismissed by some as a fad, but the application of elements found within game playing, such as competing or collaborating with others and scoring points, can effectively translate into staff training and improve engagement and interest. “The way that cyber security training sessions are happening is changing and it’s for the better,” says Helen McCullagh, a cyber risk specialist for an end-user organisation. “If you look at the engagement of sitting people down and them doing a one-hour course every year, then it is merely a box-ticking exercise. Organisations are trying to get 100% compliance, but what you have are people sitting there doing their shopping list.”


The 3 Phases Of The Metaverse

There are several misconceptions about the metaverse today. In simple terms, the metaverse is the convergence of physical and digital on a digital plane. In its ideal phase, you can access the metaverse from anywhere, just like the internet. Early metaverse apps were focused on creating games with tokenized incentives (play-to-earn) and hadn’t initially been thought of as contributing to the next phase of the internet. One of the most prominent examples is the online game Second Life, which is regarded as the earliest web2-based metaverse platform. Users have an identity projected through an avatar and participate in activities—very much a limited “second” life. ... Unlike the previous phase, Phase 2 is all about creating utilities. Brands, IP holders and companies investing in innovation have been collaborating with gaming metaverse dApps to understand consumer behaviors and economic dynamics. No-coding tools, as well as software development kits, in this phase, are empowering the end user to co-create alongside developers, designers, brands and retail investors. Still, interoperability—the import and export of digital assets—is only possible on a single chain, and the user experience is still seen as gaming in 2-D or 3-D environments.


Why the Agile approach might not be working for your projects

Although Scrum is a well-described methodology, when applied in practice it is often tailored to the specific circumstances of the organisation. These adaptations are often called ScrumBut (“we use Scrum, but …”). Some deviations from the fundamental principles of Scrum, however, may be problematic. These undesirable deviations are called anti-patterns — bad habits formed and influenced by the human factor. What exactly can we consider an anti-pattern? It can be a disagreement on whether or not the task is completed, a disruption caused by the customer, unclear items in the backlog, the indecisiveness of stakeholders (customers, management, etc.), and lack of authority or poor technical knowledge on the part of the Scrum master. We collected detailed information in three Scrum teams using a variety of data collection procedures over a sustained period of time — including observation, surveys, secondary data, and semi-structured interviews – to get a detailed understanding of anti-patterns, and their causes and consequences.


Rise of Data and Asynchronization Hyped Up at AWS re:Invent

Because it was believed that asynchronous programming was difficult, he said, operating systems tended to have restrained interfaces. “If you wanted to write to the disk, you got blocked until the block was written,” Vogels said. Change began to emerge in the 1990s, he said, with operating systems designed from the ground up to expose asynchrony to the world. “Windows NT was probably the first one to have asynchronous communication or interaction with devices as a first principle in the kernel.” Linux, Vogels said, did not pick up asynchrony until the early 2000s. The benefit of asynchrony, he said, is that it is natural, compared with the illusion of synchrony. When compute systems are tightly coupled together, it could lead to widespread failure if something goes wrong, Vogels said. With asynchronous systems, everything is decoupled. “The most important thing is that this is an architecture that can evolve very easily without having to change any of the other components,” he said. “It is a natural way of isolating failures. If any of the components fails, the whole system continues to work.”
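Vogels’ point about failure isolation is easy to demonstrate. The sketch below (plain Python asyncio, with hypothetical service names not drawn from the talk) runs three decoupled tasks concurrently; because failures are isolated, one component raising an error does not stop the others:

```python
import asyncio

async def service(name: str, fail: bool = False) -> str:
    # Simulate independent, decoupled work (e.g., a device or network call).
    await asyncio.sleep(0.01)
    if fail:
        raise RuntimeError(f"{name} failed")
    return f"{name} ok"

async def main() -> list:
    # return_exceptions=True isolates failures: one failing component
    # is reported as an exception object, while the others complete.
    return await asyncio.gather(
        service("billing"),
        service("inventory", fail=True),
        service("shipping"),
        return_exceptions=True,
    )

results = asyncio.run(main())
```

Here "billing" and "shipping" finish normally while "inventory" surfaces as a contained `RuntimeError` — the whole system keeps working.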


Entity Framework Fundamentals

EF has two ways of managing your database. In this tutorial, I will explain only one of them: code first. The other is database first. There is a big difference between them, but code first is the more widely used. But before we dive in, I want to explain both approaches. Database first is used when there is already a database present and the database will not be managed by code. Code first is used when there is no current database, but you want to create one. I like code first much more because I can write entities (these are basically classes with properties) and let EF update the database accordingly. It's just C# and I don't have to worry about the database much. I can create a class, tell EF it's an entity, update the database, and all is done! Database first is the other way around. You let the database 'decide' what kind of entities you get. You create the database first and create your code accordingly. ... With Entity Framework, it all starts with a context. It associates entities and relationships with an actual database. Entity Framework comes with DbContext, which is the context that we will be using in our code.
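As a rough illustration of the code-first idea, the toy below mimics what a context does: entities are plain classes, and the context derives the database schema from them. This is a hypothetical sketch in Python with sqlite3, not Entity Framework or its actual C# API; EF's migrations do the real, far more capable version of `ensure_created`:

```python
import sqlite3
from dataclasses import dataclass, fields

@dataclass
class Product:      # an "entity": just a class with properties
    id: int
    name: str

class Context:
    # Stand-in for a DbContext: it bridges entity classes and a database.
    def __init__(self):
        self.db = sqlite3.connect(":memory:")

    def ensure_created(self, entity) -> None:
        # Code first: the class drives the schema, not the other way around.
        cols = ", ".join(f.name for f in fields(entity))
        self.db.execute(f"CREATE TABLE IF NOT EXISTS {entity.__name__} ({cols})")

    def add(self, obj) -> None:
        marks = ", ".join("?" for _ in fields(obj))
        vals = [getattr(obj, f.name) for f in fields(obj)]
        self.db.execute(f"INSERT INTO {type(obj).__name__} VALUES ({marks})", vals)

ctx = Context()
ctx.ensure_created(Product)          # schema created from the class
ctx.add(Product(1, "Widget"))
count = ctx.db.execute("SELECT COUNT(*) FROM Product").fetchone()[0]
```

The flow mirrors the one described above: write a class, tell the context it's an entity, and the database follows.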


How Executive Coaching Can Help You Level Up Your Organization

As we all know, the desire for personal growth is extremely valuable. However, as employee demands from the workplace have shifted, leadership skills have not. As employees climb the ranks, they find their way into leadership without necessarily learning the skills and techniques required to lead. Many new leaders turn to a trusted mentor who would only provide information based on lived experience. On the other hand, executive coaches are tasked with improving performances and capabilities as their day job. But there is a misconception that executive coaches are for leaders who have done something wrong. While it's true that an executive coach could help a difficult employee become a better teammate, they can also be guides for leaders to pursue their desired career paths. Leadership coaching holds that the main drivers of innovation in an organization are the people and the corporate culture, and it can provide leaders with the tools to master these levers. An executive coaching professional can guide leaders through the steps that allow them to set the foundations of an innovative and competitive company.


Ransomware: Is there hope beyond the overhyped?

The old way of thinking about cyber security was imagining it like a castle. You’ve got the vast perimeter – the castle walls – and inside was the keep, where employees and data would live. But now organisations are operating in various locations. They’ve got their cloud estate in one or more providers, source code residing in another location, and vast amounts of work devices that are now no longer behind the castle walls, but at employees’ homes – the list could go on for ever. These are all areas that could potentially be breached and used to gain intelligence on the business. The attack surface is growing, and the castle wall can no longer circle around all these places to protect them. Attack surface management will play a big part in tackling this issue. It allows security and IT teams to almost visualise the external parts of the business, identify targets and assess risks based on the opportunities they present to a malicious attacker. In the face of a constantly growing attack surface, this can enable businesses to establish a proactive security approach and adopt principles such as assume breach and cyber resilience.


How data analysts can help CIOs bridge the tech talent shortfall

Business analytics are only as good as the data they’re using. Given the wealth and complexity of data, it’s easy to understand why leaders are often overwhelmed in their attempts to access better analytics and insights. This is where data professionals can help. Data scientists and analysts are experts in statistics, math, databases, and systems. They are especially adept at looking at historical metrics, recognizing patterns, pulling in market insights, and identifying outlier data to ensure the best data points are utilized. They’re also able to organize vast amounts of unstructured data, which is often very valuable but difficult to analyze, by leveraging conventional databases and other tools to make the data more actionable. ... It’s also important to look at the attributes of the data scientists and analysts themselves. In addition to having technical skills, data professionals with a background in programming, data visualization, and machine learning are also highly valuable. On the non-technical side, they should have strong interpersonal and communication skills to relay their findings to the tech team and those without a tech or math background.


What Does Technical Debt Tell You?

Making most architectural decisions at the beginning of a project, often before the QARs are precisely defined, results in an upfront architecture that may not be easy to evolve and will probably need to be significantly refactored when the QARs are better defined. Contrastingly, having a continuous flow of architectural decisions as part of each Sprint results in an agile architecture that can better respond to QAR changes. Almost every architectural decision is a trade-off between at least two QARs. For example, consider security vs. usability. Regardless of the decision being made, it is likely to increase technical debt, either by making the system more vulnerable by giving priority to usability or making it less usable by giving priority to security. Either way, this will need to be addressed at some point in the future, as the user population increases, and the initial decision to prioritize one QAR over the other may need to be reversed to keep the technical debt manageable. Other examples include scalability vs. modifiability, and scalability vs. time to market. These decisions are often characterized as "satisficing", i.e., "good enough". 



Quote for the day:

"The ability to summon positive emotions during periods of intense stress lies at the heart of effective leadership." -- Jim Loehr

Daily Tech Digest - December 04, 2022

How Your Organization Can Enhance Its Cybersecurity Posture

You need to be prepared for the worst-case scenario. Most organizations are unaware that they have been breached until their data is held to ransom or has been publicly exposed. According to the Information Commissioner's Office, "you must report a notifiable breach to the ICO without undue delay, but not later than 72 hours after becoming aware of it. If you take longer than this, you must give reasons for the delay." After you have made the report, how do you go forward on securing the rest of your business? And do you know what steps can be used to lessen the damage done? Controlling the users, logs and security is essential. "This is especially true when regarding data protection and information security. Even more so when this data concerns the handling of financial, personal and/or client-sensitive information," SecurityHQ says. ... How often do you do security testing, and what types of security testing do you do? Do you conduct simulated phishing attacks? Do you have vulnerability management in place? Do you know how secure your firewalls are? Do you conduct red team exercises?


Brooklyn Hospitals Decried for Silence on Cyber Incident

Errol Weiss, chief security officer at the Health Information Sharing and Analysis Center, says a lack of transparency by healthcare organizations dealing with ransomware incidents is a common problem. "Despite being a member of an ISAC, we still see organizations reluctant to share attack details when they are a victim of a cyber incident," he says. Senior leaders at those organizations may not trust the anonymity and trust built into information-sharing processes and may be concerned about further exposure and negative reputational impact from unauthorized disclosures, he says. "Given our incredibly litigious society, internal counsel at the impacted organization may also recommend against disclosure outside the company because it could possibly be used against the firm in future litigation," he says. Many organizations do not realize that they have liability protections involving cyber information sharing under the Cybersecurity Information Sharing Act of 2015, he says. "We just need the government and society to create a culture that rewards sharing and does not punish the victim."


8 things to consider amid cybersecurity vendor layoffs

Layoffs of engineers and developers should be the most concerning for CISOs and security teams, Burn adds, describing them as the “canary in the coalmine” when it comes to spotting and fixing security threats. “Often, when we see some of these early layoffs, they impact recruitment or marketing staff, but that shouldn’t concern you really.” However, if you’re looking on LinkedIn and seeing engineers or developers being laid off, that should give you pause for thought, Burn says. Dickson concurs, adding that sales or marketing cuts are unlikely to affect the ability to get security value from the vendor, but cuts to key service or engineering staff could well do just that. For Thacker, the biggest risks to customers would come from a reduction in DevSecOps staffing, “which would potentially bring about a reduction in security oversight, feature updates, and even impact upon the general availability of the service,” while Yuval Wollman, chief cyber officer and managing director of UST, thinks cuts to innovation and research staff could have a direct impact on a product’s efficiency and reliability as the threat landscape evolves and changes.


Is AI moving too fast for ethics? | The AI Beat

The Stable Diffusion news nearly drowned out the applause and chatter of the previous two days, which was all around Meta’s latest AI research announcement about Cicero, an AI agent that masters the difficult and popular strategy game Diplomacy — showing off the machine’s ability to master negotiation, persuasion and cooperation with humans. In a paper published last week in Science, Cicero is said to have ranked in the top 10 percent of players in an online Diplomacy league and achieved more than double the average score of the human players — by combining language models with strategic reasoning. Even AI critics like Gary Marcus found plenty to cheer about regarding Cicero’s prowess: “Cicero is in many ways a marvel,” he said. “It has achieved by far the deepest and most extensive integration of language and action in a dynamic world of any AI system built to date. It has also succeeded in carrying out complex interactions with humans of a form not previously seen.”


Talent development: 4 upskilling success stories

Career development is a focus for all employees, even entry-level workers, and everyone is given several opportunities to grow their skills and learn new technologies. For example, an entry-level code developer at Altria will be thrown into highly technical work right away, so they gain experience fast. And then throughout their first five to six years with the company, they will be moved around IT departments to work on different projects, gaining more experience and potentially finding out what they’re most passionate about. “In many cases, we’re trying to put them into a role that ultimately is going to make them sweat — it’s going to really challenge them,” says Dan Cornell, vice president and CIO of Altria Group. Employees also go through an annual talent planning review process to assess where they are in their careers, what they aspire to within the organization, and how they want to shape their career moving forward. Managers can identify areas for growth, what skills can be developed, opportunities for training, and potential experiences in other departments they might benefit from.


The Metaverse Could Become a Top Avenue for Cyberattacks in 2023

Privacy will emerge as a major concern in the metaverse, Kaspersky predicted. "As the metaverse experience is universal and does not obey regional data protection laws, such as GDPR, this might create complex conflicts between the requirements of the regulations regarding data breach notification," Kaspersky said. Others have also expressed concern over the increased amount of personal information that will be collected in fully immersive environments via VR headsets and their collection of cameras, microphones, and motion trackers. Many expect the data will reveal a lot about a user's location, appearance, and other private information while also enabling attackers to carry out more sophisticated phishing and social engineering scams. At least some of the attacks in virtual reality and augmented reality environments will involve virtual abuse and sexual assault — such as that involving cases of avatar rape, Kaspersky said. The security vendor pointed to an incident where an avatar associated with a researcher at a nonprofit advocacy group was raped on a metaverse platform owned by Meta as one example of the kind of issues consumers can increasingly run into.


Why Change Management Skills Are Essential To Data-Driven Success

A simple way of looking at change management is to view it as a set of people-related strategies and tactics that can help shift behaviors and mindsets. It’s an essential skill set for everyone who works with data from the Chief Data Officer (CDO) down to junior analysts. Data leaders will be primarily focused on cultural and procedural resistance, whereas analysts may only deal with decisional resistance. The scope will differ across roles, but everyone plays a valuable part in the transformative process. Change management is a deep, multi-faceted subject, and there is a vast body of work on the topic. ... To build momentum with your data initiatives, it’s important to deliver quick wins. Rather than waiting for a long-term payoff, potential skeptics or detractors need to see faster returns. When people get a taste of what’s possible through real-world improvements, it becomes easier for them to envision what the future state with data looks like and get on board with the changes.


5 top qualities you need to become a next-gen CISO

Next-gen CISOs are charismatic, innovative, well-connected, and well-respected individuals across the organization and the security industry. They never waste an opportunity to show the value information security brings to the business. They are increasingly creating reporting structures outside of IT to emphasize their independence. Next-gen CISOs regularly participate in industry events and often share their experiences across social media as well as broadcast and print media, helping to further their reputation and influence.

Understands the business, earns trust, and practices empathy

Next-gen CISOs need to understand the business context behind day-to-day challenges faced by employees, without which they cannot make the right security decisions. They should help build employee, customer, partner, and business stakeholder trust through regular engagement and collaboration. CISOs must shed their ivory tower mentality and build bridges with those departments and managers known to be critical of information security.


From capex to opex: Storage procurement options bloom

What we are seeing among storage suppliers is the emergence of consumption models of purchasing for on-site capacity that mirror the ways we buy cloud services. Cloud – in the sense of services delivered remotely – is not always suited to the ways customers work. Some avoid the cloud for reasons of performance, compliance, or risk to security or availability. And so, although true pay-as-you-go storage may have its roots in the cloud, there are now on-site options that bring the same levels of flexibility. These can range from opex-based consumption models in which the hardware remains the supplier’s property and customers pay only for the capacity they use, to fully owned capex purchases in which hardware upgrades are built in as required. At the opex end of things, customers usually commit to base levels of usage, while upgrades to storage and controller hardware are delivered as required. At the capex end of the spectrum, customers can purchase storage hardware outright. But here, some suppliers now offer the option to buy the hardware while still benefiting from upgrades to storage hardware, with monitoring and predictive analytics.


Event-driven automation: How to build an event-driven automation architecture

In addition to the events topic, we also have a few other messaging pipelines handled by AMQ (create the task, invoke automation and automation results listener). Each of these will be communicating with the services layer which will handle system events, task management, automation invocation and automation results tracking. These services will also be required to communicate with the intelligent router, which will handle the prioritization based on built-in logic set by your organization. And finally, in this network we include the task and execution stores that hold the data being transacted upon throughout these events. The Manage Task microservice will need to log information into the ticketing system, which isn’t required to be on an isolated network, but is depicted as such to clarify it only needs to communicate with that service, and not the entire architecture. Similarly, the Automation Results service will communicate with both the orchestrator and the results listener, but it’s not required for an isolated network if you want to simplify things in your own implementation.
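A stripped-down version of the routing idea can be sketched in a few lines. The example below is hypothetical (plain Python, with a `PriorityQueue` standing in for the AMQ pipelines and a trivial severity rule standing in for your organization's built-in logic); it shows an "intelligent router" assigning priorities so that critical events are handled first:

```python
from dataclasses import dataclass, field
from queue import PriorityQueue

@dataclass(order=True)
class Task:
    priority: int                      # lower value = handled first
    name: str = field(compare=False)   # compared only by priority

def route(event: dict) -> Task:
    # Toy "intelligent router": organization-defined logic (here, just
    # the event's severity) decides how the task is prioritized.
    priority = 0 if event.get("severity") == "critical" else 10
    return Task(priority, event["name"])

bus = PriorityQueue()                  # stands in for the AMQ task pipeline
for event in [{"name": "disk-cleanup", "severity": "low"},
              {"name": "node-down", "severity": "critical"}]:
    bus.put(route(event))

# The automation-invocation side drains tasks in priority order.
handled = [bus.get().name for _ in range(bus.qsize())]
```

Even though "disk-cleanup" was published first, "node-down" is pulled off the queue first because the router gave it a higher priority.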



Quote for the day:

"Integrity is the soul of leadership! Trust is the engine of leadership!" -- Amine A. Ayad

Daily Tech Digest - December 01, 2022

Data-center requirements should drive network architecture

Fabric architectures for the data center are essential because of the issue of latency. Componentization of applications, the separation of databases from applications, and the increased interactivity of applications overall have combined to make applications sensitive to network delays. That sensitivity is addressed in the data center by fabric or other low-latency switching architectures, but it also impacts the rest of the network. Few CIOs have included latency requirements in their SLAs in the past, but more are doing so now. In 2023, CIMI Corporation survey data shows that over half of the new network contracts written will include latency requirements, up 15% from 2022 and double the level of 2021. Mesh/fabric architectures connect everything to everything else with minimal delay, but universal connectivity isn’t always a good thing. To control connectivity, data-center networks can employ either explicit connection control—software-defined networks (SDN)—or a virtual network.


UK Companies Fear Reporting Cyber Incidents, Parliament Told

The possibility of regulatory consequences to disclosing incidents drives a wedge between businesses and law enforcement, said Jayan Perera, head of cyber response at London-based Control Risks, while testifying Monday before Parliament's Joint Committee on National Security Strategy. "The fear may not be that law enforcement will come and slap the handcuffs on them," Perera told the committee. Rather, they fear that calling police during a cyber incident "will then lead to, you know, some other broader fallout in terms of the regulatory environment." Reporting that allowed businesses to anonymously disclose incidents would result in more data, he suggested. ... Perera wasn't the only one during the hearing to suggest that companies are punished for disclosure. "The comment is also made … that the Americans tend to support their businesses, whereas the other comment also made is that the U.K. tends to find fault when someone gets into trouble," said Lilian Pauline Neville-Jones, a Conservative member of the House of Lords.


Know thy enemy: thinking like a hacker can boost cybersecurity strategy

“There is a misconception security teams have about how hackers target our networks,” says Alex Spivakovsky, who as vice-president of research at security software maker Pentera has studied this topic. “Today, many security teams hyperfocus on vulnerability management and rush to patch [common vulnerabilities and exposures] as quickly as possible because, ultimately, they believe that the hackers are specifically looking to exploit CVEs. In reality, it doesn’t actually reduce their risk significantly, because it doesn’t align with how hackers actually behave.” Spivakovsky, an experienced penetration tester who served with the Israel Defense Forces units responsible for protecting critical state infrastructure, says hackers operate like a business, seeking to minimize resources and maximize returns. In other words, they generally want to put in as little effort as possible to achieve maximum benefit. He says hackers typically follow a certain path of action: once they breach an IT environment and have an active connection, they collect such data as usernames, IP addresses, and email addresses.


Cybersecurity incidents cost organizations $1,197 per employee, per year

Perception Point’s report notes that one of the key challenges for defenders is that threat actors have changed their attack toolkits beyond email and the web browser, with attacks on cloud-based apps and services, such as collaboration apps and storage, occurring at 60% of the frequency with which they occur on email-based services. Given that Gartner estimates that nearly 80% of workers are using collaboration tools for work, enterprises not only need cost-efficient ways to prevent cyberattacks across on-premises and cloud environments, but they also need a robust incident response process to resolve security incidents in the shortest time possible. “In terms of the potential risk and damages — prevention of attacks has a greater financial impact on the organization,” said Michael Calev, Perception Point’s VP of corporate development and strategy. “One successful breach for an organization can cause damage amounting to millions of dollars — for bigger companies this could mean a significant loss in revenue, production capabilities, and a hit to their reputation, while for smaller companies it could spell disaster and even the end of their ability to operate,” Calev said.


Who Is Watching Your Data?

As data volumes grow, it will become increasingly important to master data observability. A recent study of senior professionals from IDC that was sponsored by my company found that a majority of organizations with the highest data intelligence maturity are on the path toward data quality and data observability. The future is really about what we will observe, and I believe it will move beyond data quality to the volume, frequency and behavior of data. We will start observing the infrastructure side, including how much storage is necessary, how much compute is necessary and how much it is costing. For instance, you might do an integration every night, but suddenly someone has made a small change, and it becomes 100 times more expensive. No one wants that surprise. I expect the scope of what we are observing to expand dramatically into other areas, too, particularly into security and privacy checks to ensure sensitive data is used only in the way it should be. In this cloud world, there are so many possibilities.


AWS CEO urges enterprises to do more in the cloud in the face of economic uncertainty

“If you’re looking to tighten your belt, the cloud is the place to do it,” said Selipsky – because of the flexibility it offers enterprises when it comes to scaling up or down their operations in the face of fluctuating demand. He went on to share the story of app-based holiday rental company Airbnb which, because of its earlier foray into the public cloud, was better equipped to weather the downturn in demand for its services during the Covid-19 pandemic. “Airbnb was already a significant cloud user,” said Selipsky. “And with all their expertise in the cloud, and the efficiencies that they’ve already captured, they were far more prepared than many others when the bottom fell out of the hospitality industry in 2020. “Airbnb was able to take down their cloud spending by 27% – quickly. And then, when the world began to emerge from the worst of the pandemic, Airbnb was able to quickly turn on the cloud infrastructure that they needed, and continue to drive innovation.”


Could Software Issues Delay Widespread Electric Vehicle Adoption?

Key obstacles EV software developers face include software development complexity and the rapid pace of technology evolution, says Mathew Desmond, automotive industry solutions architect at business advisory firm Capgemini Americas. Other challenges include the pressure to continually provide new features to meet customer expectations and the need for enhanced vehicle safety requirements despite an accelerated development pace. Alex Oyler, a director with SBD Automotive, a global research and consulting firm, believes that EV software developers face two primary challenges: dual-track development and immature tools. “Many software developers are trying to develop software for both combustion engine and EV platforms at the same time, essentially doubling the complexity of their software stack,” he explains. Meanwhile, the sophisticated high-performance computers powering many modern EVs require multiple advanced development tools and skillsets. “Most of these tools are immature, with many companies developing tools and skills as they develop their cars,” Oyler says.


API Security: From Defense-in-Depth (DiD) To Zero Trust

Being able to observe security risks is critical in combating targeted attacks. After a hacker has breached the outermost layer of defenses, we need observability mechanisms to identify which traffic is likely the malicious attack traffic. Common means of implementing security observability are honeypots, IDS (Intrusion Detection System), NTA (Network Traffic Analysis), NDR (Network Detection and Response), APT (Advanced Persistent Threat) detection, and threat intelligence. Among them, honeypots are one of the oldest methods. By imitating some high-value targets to set traps for malicious attackers, they can analyze attack behaviors and even help locate attackers. On the other hand, APT detection and some machine learning methods are not intuitive to evaluate. Fortunately, for most enterprise users, simple log collection and analysis, behavior tracking, and digital evidence are enough for the timely detection of abnormal behaviors. Machine learning is an advanced but imperfect technology with some problems, such as false positives and missed detections.
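Of the mechanisms listed, a honeypot is the simplest to sketch. The toy below is hypothetical (Python standard library only): it listens on a decoy port and records the source of every connection attempt. A real honeypot would also imitate a plausible service, capture payloads, and feed the results to analysis and alerting:

```python
import socket
import threading

def start_honeypot(hits: list, host: str = "127.0.0.1"):
    # Toy honeypot: bind a decoy port (0 = OS-assigned) and record the
    # source address of every connection attempt.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():
        conn, addr = srv.accept()   # blocks until a probe arrives
        hits.append(addr)           # log the attempt for later analysis
        conn.close()
        srv.close()

    t = threading.Thread(target=serve, daemon=True)
    t.start()
    return port, t

hits = []
port, t = start_honeypot(hits)

# Simulate an attacker probing the decoy port.
probe = socket.create_connection(("127.0.0.1", port))
probe.close()
t.join(timeout=2)
```

Since no legitimate traffic should ever touch the decoy port, anything recorded in `hits` is, by construction, worth investigating.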


Why security should be on every IT department's end-of-year agenda

For many IT teams, hiring is fraught with inconsistency. This makes the end-of-year agenda extremely important for IT teams and their hiring counterparts. Deciding which employees will be promoted, what new positions can be created, and how to backfill employees who have moved on to new roles is a puzzle for both IT department leads and hiring managers. For many organizations, the end of the year means focusing on organizing this turnover ahead of the new year. From reclaiming devices of past employees to redistributing unused licenses to save funds, there are multiple staffing-related tasks to complete before year-end. With this in mind, IT teams must discuss their hiring needs for the new year and what roles they would ideally like to fill by the end of the current year. Many people leave their jobs toward the end of the year, so there will soon be more open positions than usual for cybersecurity employees. Make sure your team is clear and organized on your hiring strategy: if you’re hiring, align on priorities and the most urgent vacancies.


Ending the DevOps vs. Software Engineer Cold War

What’s at the heart of this war? To understand that, let’s unpack two major issues that emerge from this not-so-smooth but all-too-familiar scenario. First, without a common language and clear communication channels, no two parties can work together even on simple tasks, let alone complex ones. Second, even with a common language, all the excess work, context switching, delays, and inevitable friction lead to cold-war-level frustration brewing within your organization. Adding to these issues are the blurred lines of responsibility that the DevOps model has created for both software engineering and DevOps (aka operations) teams. But the reality is that: software engineers want to code, implement features, and run them on infrastructure (so the customers can use them), without a lot of hassle and without getting bogged down in the operational details; DevOps want to focus on streamlining and keeping production stable, optimizing infrastructure, improving monitoring, and general innovation, without getting sucked into the rabbit hole of end-user (i.e., software engineers’) service and access requests.



Quote for the day:

"The final test of a leader is that he leaves behind him in other men, the conviction and the will to carry on." -- Walter Lippmann

Daily Tech Digest - November 30, 2022

7 lies IT leaders should never tell

Things break, and in most cases, it comes as a surprise. IT consists of many systems requiring different degrees of connectivity and monitoring, making it difficult to know absolutely everything at every moment. The key to minimizing failures is to be proactive rather than simply waiting for bad things to happen. CIOs should not only expect things to break but also be honest about this with their team members and business colleagues. “Eat, sleep, and live that life,” advises Andre Preoteasa, internal IT director at IT business management firm Electric. “There are things you know, things you don’t know, and things you don’t know you don’t know,” he observes. “Write down the first two, then think endlessly about the last one — it will make you more prepared for the unknowns when they happen.” Preoteasa stresses the importance of building and maintaining detailed disaster recovery and business continuity plans. “IT leaders that don’t have [such plans] put the company in a bad position,” he notes. “The exercise alone of writing things down shows you’re thinking about the future.”


Amid Legal Fallout, Cyber Insurers Redefine State-Sponsored Attacks as Act of War

Acts of war are a common insurance exclusion. Traditionally, exclusions required a "hot war," such as what we see in Ukraine today. However, courts are starting to recognize cyberattacks as potential acts of war without a declaration of war or the use of land troops or aircraft. The state-sponsored attack itself constitutes a war footing, the carriers maintain. ... Effectively, Forrester's Valente notes, larger enterprises might have to set aside large stores of cash in case they are hit with a state-sponsored attack. Should insurance carriers be successful in asserting in court that a state-sponsored attack is, by definition, an act of war, no company will have coverage unless they negotiate that into the contract specifically to eliminate the exclusion. When buying cyber insurance, "it is worth having a detailed conversation with the broker to compare so-called 'war exclusions' and determining whether there are carriers offering more favorable terms," says Scott Godes, partner and co-chair of the Insurance Recovery and Counseling Practice and the Data Security & Privacy practice at District of Columbia law firm Barnes & Thornburg.


Top 5 challenges of implementing industrial IoT

Scalability is another challenge faced by professionals trying to make progress with their IIoT implementations. Bain’s 2022 study of IIoT decision-makers indicated that 80% of those who purchase IIoT technology scale fewer than 60% of their planned projects. The top three reasons why those respondents failed to scale their projects were that the integration effort was overly complicated and required too much effort, the associated vendors could not support scaling, and the life cycle support for the project was too expensive or not credible. One of the study’s takeaways was that hardware could help close gaps that prevent company decision-makers from scaling. Another best practice is for people to take a long-term viewpoint with any IIoT project. Some people may only think about what it will take to implement an initial proof of concept. That’s just a starting point. They’ll have to look beyond the early efforts if they want to eventually scale the project, but many of the things learned during the starting phase of a project can be beneficial to know during later stages.


AWS And Blockchain

The customer CIO, an extremely smart person, spoke up, in beautifully-rounded European vowels: “Here’s a use case I’ve been told about that’s on my mind.” He named a region in Asia and explained that the small farmers there mark their landholdings carefully, but then the annual floods sometimes wash the markers away. Then unscrupulous larger landowners use the absence of markers to cut away at the smallholdings of the poorest. “But if the boundary markers were on the blockchain,” he said, “they wouldn’t be able to do that, would they?” ... I thought. Then said “As a lifelong technologist, I’ve always been dubious about technology as a solution to a political problem. It seems a good idea to have a land-registry database but, blockchain or no, I wonder if the large landowners might be able to find another way to fiddle the records and still steal the land? Perhaps this is more about power than boundary markers?” Later in the ensuing discussion I cautiously offered something like the following, locking eyes on the CIO: “There are many among Amazon’s senior engineers who think blockchain is a solution looking for a problem.” He went entirely expressionless and the discussion moved on.

The key message is that before the data is persisted into the storage layers (Bronze, Silver, Gold), it must pass data quality checks, and the corrupted records that fail those checks must be dealt with separately before anything is written into the storage layer. ... The “Bronze => Silver => Gold” pattern is a type of data flow design, also called a medallion architecture. A medallion architecture is designed to incrementally and progressively improve the structure and quality of data as it flows through each layer of the architecture. This is why it is relevant for today’s article regarding data quality and reliability. ... Generally, the data quality requirements become more and more stringent as the data flows from raw to Bronze to Silver to Gold, since the Gold layer directly serves the business. You should, by now, have a high-level understanding of what the medallion design pattern is and why it is relevant for a data quality discussion.
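The quality gate described above — validate incoming records before persisting them, and route failures aside for separate handling — can be sketched as follows. The field names and record shape are illustrative assumptions, not part of the medallion pattern itself:

```python
def quality_gate(records, required_fields=("id", "amount")):
    """Split incoming records into clean rows (safe to persist into the
    Bronze layer) and quarantined rows that failed the quality checks."""
    clean, quarantined = [], []
    for rec in records:
        if all(rec.get(field) is not None for field in required_fields):
            clean.append(rec)
        else:
            quarantined.append(rec)
    return clean, quarantined

raw = [{"id": 1, "amount": 9.5}, {"id": 2, "amount": None}, {"amount": 3.0}]
bronze_rows, bad_rows = quality_gate(raw)
# Only the first record passes; the other two are quarantined, not written.
```

Each subsequent layer (Silver, Gold) would apply its own, stricter gate in the same style before its write step.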


The Digital Skills Gap is Jeopardising Growth

With people staying in workforces longer than ever before and careers spanning five decades becoming the norm, upskilling at a massive scale is needed. However, this need is not fully addressed; a worrying 6 in 10 (58%) people we surveyed in the UK told us that they have already been negatively affected by a lack of digital skills. Organisations can’t just rely on recruiting from a limited pool of digital specialists. More focus is also needed by organisations to upskill their own employees, in both tech and human digital skills. At a recent digital skills panel debate in Manchester, the director of a recruitment agency stated bluntly that: “Many businesses are currently overpaying to bring in external digital skills because of increased competition and this just isn’t sustainable. Upskilling your current teams should be as important as recruiting in new talent to keep costs in check and create a more balanced and loyal workforce.” It’s crucial to upskill employees, not only to get the necessary digital capabilities in our organisations, but to build loyalty and retain valued team members.


Emerging sustainable technologies – expert predictions

AI and automation technologies offer a smart solution, too; they could channel energy when it is plentiful into less time-sensitive uses, such as charging up electric vehicles or heating storage heaters. For example, Drax has looked at ways of combining AI with smart meters to channel our energy use, so that we take advantage of those periods when energy creation exceeds demand. The debate over whether we need new technologies or just need to scale-up existing sustainable technologies has even reached the higher echelons of power. John Kerry, US special presidential envoy for climate, and a certain Bill Gates say we need technologies which haven’t been invented yet. World-renowned climate change scientist Michael Mann disagrees. In his expert opinion, we just need to scale up existing technologies. ... But there is one other application — an application which will create extraordinary opportunity and open the way for many technologies we have been considering up to now. When all of our power is provided by renewables, the total annual supply is likely to exceed total annual demand by a large margin.


Women in IT: Progress in Workforce Culture, But Problems Persist

From Milică's perspective, the greatest challenge facing women in IT today is a lack of role models. “Women need to be the role models who can inspire young minds, especially more women and minority leaders,” she says. “Even at the individual level, each of us -- teachers, parents, and other influential adults -- can plant the seed and grow the understanding among young people of the importance of IT jobs, and how that career path can make a difference in our world and society.” She adds hiring bias and pay inequality, along with the lack of female role models, leaders, and advancement opportunities, all discourage women from pursuing a STEAM career. “Women have to work much harder both to get hired and to advance their careers -- which perhaps explains why 52% of women in cybersecurity hold postgraduate degrees, compared to only 44% of men,” Milică notes. She adds the industry also hasn’t done a great job sparking interest at an early age. “Attention to a career path starts with children as early as elementary school, and by middle or high school, many students will have made their decisions,” she explains.


EPSS explained: How does it compare to CVSS?

EPSS aims to help security practitioners and their organizations improve vulnerability prioritization efforts. The number of vulnerabilities in today’s digital landscape is growing exponentially, driven by factors such as the increased digitization of systems and society, increased scrutiny of digital products, and improved research and reporting capabilities. Organizations generally can only fix between 5% and 20% of vulnerabilities each month, EPSS claims, yet fewer than 10% of published vulnerabilities are ever known to be exploited in the wild. Longstanding workforce issues are also at play: the annual ISC2 Cybersecurity Workforce Study shows a global shortage exceeding two million cybersecurity professionals. These factors warrant a coherent and effective approach to prioritizing the vulnerabilities that pose the highest risk to the organization, to avoid wasting limited resources and time. The EPSS model aims to provide some support by producing a probability score that a vulnerability will be exploited in the next 30 days; scores range between 0 and 1, or 0% and 100%.
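As a sketch of how such a score feeds prioritization — the CVE identifiers and probabilities below are made up for illustration; real scores come from the FIRST EPSS data feed:

```python
# Hypothetical CVE -> EPSS probability map. A score of 0.91 means an
# estimated 91% probability of exploitation in the next 30 days.
epss_scores = {
    "CVE-2022-0001": 0.02,
    "CVE-2022-0002": 0.91,
    "CVE-2022-0003": 0.47,
}

def prioritize(scores, threshold=0.10):
    """Return CVEs at or above the threshold, highest probability first."""
    hits = [(cve, p) for cve, p in scores.items() if p >= threshold]
    return sorted(hits, key=lambda pair: pair[1], reverse=True)

print(prioritize(epss_scores))
# [('CVE-2022-0002', 0.91), ('CVE-2022-0003', 0.47)]
```

With a fixed monthly remediation capacity (the 5%–20% the article cites), working down this sorted list concentrates effort on the vulnerabilities most likely to actually be exploited.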


Could it be quitting time?

The book tackles a challenge that proves stubbornly difficult for most people. Letting go of anything is hard, especially at a time when pundits tout the power of grit, building resilience, and toughing it out. Duke provides permission to see quitting as not only viable but often preferable, and she explains why people rarely give up at the right time. “Quitting is hard, too hard to do entirely on our own,” she writes. “We as individuals are riddled by the host of biases, like the sunk cost fallacy, endowment effect, status quo bias, and loss aversion, which lead to escalation of commitment. Our identities are entwined in the things that we’re doing. Our instinct is to want to protect that identity, making us stick to things even more.” These biases—some of them unconscious—prompt us to stick with jobs that have lost their appeal or value; hold on to losing stocks long after an inner voice screams “Sell!”; or endure myriad other situations that no longer serve us. Duke focuses far more on the thinking behind the decision to “quit or grit” rather than on the decision’s final outcomes.



Quote for the day:

"Teamwork is the secret that makes common people achieve uncommon results." -- Ifeanyi Enoch Onuoha

Daily Tech Digest - November 29, 2022

Cloud-Native Goes Mainstream as CFOs Seek to Monitor Costs

There's interest from the CFO organization in third-party tools for cloud cost management and optimization that can give them a vendor-neutral tool, especially in multicloud environments, according to Forrester analyst Lee Sustar. "The cost management tools from cloud providers are generally fine for tactical decisions on spending but do not always provide the higher level views that the CFO office is looking for," he added. As organizations move to a cloud-native strategy, Sustar said the initiative will often come from the IT enterprise architects and the CTO organization, with backing from the office of the CIO. "Partners of various sorts are often needed in the shift to cloud-native, as they help generalize the lessons from the early adopters," he noted. "Today, organizations new to the cloud are focused not on lifting and shifting existing workloads alone, but modernizing on cloud-native tech. Multicloud container platform vendors offer a more integrated approach that can be tailored to different cloud providers," Sustar added.


Financial services increasingly targeted for API-based cyberattacks

APIs are a core part of how financial services firms are changing their operations in the modern era, Akamai said, given the growing desire for more and more app-based services among the consumer base. The pandemic merely accelerated a growing trend toward remote banking services, which led to a corresponding growth in the use of APIs. With every application and every standardization of how various app functions talk to one another, which creates APIs, the potential target surface for an attacker increases, however. Only high-tech firms and e-commerce companies were more heavily targeted via API exploits than the financial services industry. “Once attackers launch web applications attacks successfully, they could steal confidential data, and in more severe cases, gain initial access to a network and obtain more credentials that could allow them to move laterally,” the report said. “Aside from the implications of a breach, stolen information could be peddled in the underground or used for other attacks. This is highly concerning given the troves of data, such as personally identifiable information and account details, held by the financial services vertical.”


The future of cloud computing in 2023

Gartner research estimates that we exceeded one billion knowledge workers globally in 2019. These workers are defined as those who need to think creatively and deliver conclusions for strategic impact. These are the very people that cloud technology was designed to facilitate. Cloud integrations in many cases can be hugely advanced and mature from an operational standpoint. Businesses have integrated multi-cloud solutions, containerization and continuously learning AI/ML algorithms to deliver truly cutting-edge results, but those results are often not delivered at the scale or speed necessary to make split-second decisions needed to thrive in today’s operating environment. For cloud democratization to be successful, companies need to upskill their knowledge workers and upskill them with the right tools needed to deliver value from cloud analytics. Low-code and no-code tools reduce the experiential hurdle needed to deliver value from in-cloud data, whilst simultaneously delivering on the original vision of cloud technology — giving people the power they need to have their voices heard.


What Makes BI and Data Warehouses Inseparable?

Every effective BI system has a potent DWH at its core. That is because a data warehouse is a platform used to centrally gather, store, and prepare data from many sources for later use in business intelligence and analytics. Consider it a single repository for all the data needed for BI analyses. Historical and current data are kept structured in an analytics DWH, ideal for sophisticated querying. Once connected, business intelligence tools produce reports with forecasts, trends, and other visualizations that support practical insights. ETL (extract, transform, and load) tools, a DWH database, DWH access tools, and reporting layers are all parts of a business analytics data warehouse. These technologies speed up the data science process and reduce, or do away with entirely, the need to write code to handle data pipelines. The ETL tools assist in extracting data from source systems, converting formats, and loading the data into the DWH. Structured data for reporting is stored and managed by the database component.
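A toy end-to-end version of that ETL flow — extract raw rows, transform them into a consistent format, and load them into a warehouse table for querying — might look like the following. SQLite stands in for a real DWH here, and the table and sample data are invented:

```python
import sqlite3

# Extract: raw rows as they might arrive from a source system,
# with inconsistent currency codes and amounts stored as strings.
raw_rows = [("2022-11-29", "EUR", "1,200.50"), ("2022-11-30", "eur", "980.00")]

def transform(rows):
    """Normalize currency codes to uppercase and parse amounts into floats."""
    return [(day, cur.upper(), float(amt.replace(",", ""))) for day, cur, amt in rows]

# Load: write the cleaned rows into the warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (day TEXT, currency TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", transform(raw_rows))

# The reporting layer can now query structured data directly.
total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
print(total)  # 2180.5
```

Commercial ETL tools wrap exactly these three steps in visual pipelines and schedulers, which is what lets teams avoid hand-writing this plumbing.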


Covering Data Breaches in an Ethical Way

Ransomware and extortion groups usually publicly release stolen data if a victim doesn't pay. In many cases, the victim organization hasn't publicly acknowledged it has been attacked. Should we write or tweet about that? ... These are victims of crime, and not every organization handles these situations well, but the media can make it worse. Are there exceptions to this rule? Sure. If an organization hasn't acknowledged an incident but numerous media outlets have published pieces, then the incident could be considered public enough. But many people tweet or write stories about victims as soon as their data appears on a leak site. I think that is unfair and plays into the attackers' hands, increasing pressure on victims. ... Using leaked personal details to contact people affected by a data breach is a touchy area. I only do this in very limited circumstances. I did it with one person in the Optus breach. The reason was that, at that point, there were doubts about whether the data had originated with Optus. The person also lived down the road from me, so I could talk to them in person.


EU Council adopts the NIS2 directive

NIS2 will set the baseline for cybersecurity risk management measures and reporting obligations across all sectors that are covered by the directive, such as energy, transport, health and digital infrastructure. The revised directive aims to harmonise cybersecurity requirements and implementation of cybersecurity measures in different member states. To achieve this, it sets out minimum rules for a regulatory framework and lays down mechanisms for effective cooperation among relevant authorities in each member state. It updates the list of sectors and activities subject to cybersecurity obligations and provides for remedies and sanctions to ensure enforcement. The directive will formally establish the European Cyber Crises Liaison Organisation Network, EU-CyCLONe, which will support the coordinated management of large-scale cybersecurity incidents and crises. While under the old NIS directive member states were responsible for determining which entities would meet the criteria to qualify as operators of essential services, the new NIS2 directive introduces a size-cap rule as a general rule for identification of regulated entities.


Cybersecurity: How to do More for Less

When assessing your existing security stack, several important questions need to be asked: Are you getting the most out of your tools? How are you measuring their efficiency and effectiveness? Are any tools dormant? And how much automation is being achieved? The same should be asked of your IT stack–is there any bloat and technical debt? Across your IT and security infrastructure, there are often unnecessary layers of complexity in processes, policies and tools that can lead to waste. For example, having too many tools leads to high maintenance and configuration overheads, draining both resources and money. Similarly, technologies that combine on-premises infrastructure and third-party cloud providers require complex management and processes. IT and cybersecurity teams, therefore, need to work together with a clear shared vision to find ways to drive efficiency without reducing security. This requires clarity over roles and responsibilities between security and IT teams for asset management and deployment of security tools. It sounds straightforward but often is not, due to historic approaches to tool rollout.


Being Agile - A Success Story

To better understand the Agile methodology and its concepts, it is crucial to understand the Waterfall methodology. Waterfall is another well-known Software Development Life Cycle (SDLC) methodology: a strict, linear approach to software development that aims to deliver one significant project outcome at the end. Agile methodology, by contrast, is an iterative method that delivers results in short intervals. Agile relies on integrating a feedback loop to drive the next iteration of work. The diagram below describes other significant differences between these methodologies. In Waterfall, we define and fix the scope and estimate the resources and time to complete the task. In Agile, the time and resources are fixed (called an "iteration"), and the work is estimated for every iteration. Agile helps estimate and evaluate the work that brings value to the product and the stakeholders. It is always a topic of debate as to which methodology to use for a project. Some projects are better managed with Waterfall, while others are an excellent fit for Agile.


User Interface Rules That Should Never Be Overlooked

The most important user interface design rule that should never be overlooked is the rule of clarity. Clarity is critical when it comes to user interfaces, says Zeeshan Arif, founder and CEO of Whizpool, a software and website development company. “When you're designing an interface, you need to make sure your users understand what they can do at all times,” Arif advises. This means making sure that buttons are correctly labeled and that there aren't any unexpected changes or surprises that might confuse users. “If a button says ‘delete’, then it should delete whatever it's supposed to delete -- and only that thing,” he says. “If you have a button that does something else, then either make it a different color or label it differently, but don't put in something that looks like a delete button but doesn't actually delete anything.” Don't perplex users by designing a user interface crammed with superfluous options and/or features. “If you have too many buttons on one page, and none of them are labeled well enough for someone who isn't familiar with them, [users will] probably just give up before they even get started using your product, service, app, or website,” Arif says.


6 non-negotiable skills for CIOs in 2023

CIOs need to think about both internal integrations and external opportunities. They need to have strong relationships and be able to pull the business leaders together. For example, I’m working with an entrepreneurial organization that runs different lines of businesses that are very strong, with heads of those businesses who are also very strong. One of their challenges, however, is that their clients can be customers of multiple businesses. Between the seams, the client experiences the organizational structure of the business, which is a problem – a client should never experience your organizational structure. The person best equipped to identify and close those seams and integration points is the CIO. ... In the past, most organizations operated with a business group that sat between technology and the clients. The movement around agile, however, has knocked those walls down and today allows IT to become client-obsessed – we’re cross-functional teams that are empowered and organized around business and client outcomes. As a CIO, you need to spend time with clients and have a strong internal mission, too. You have to develop great leaders and motivate and engage an entire organization.



Quote for the day:

"A leader has the vision and conviction that a dream can be achieved. He inspires the power and energy to get it done." -- Ralph Nader