Daily Tech Digest - July 12, 2021

Red teaming – getting prepared for the inevitable

A red teaming exercise is undertaken with the aim of exploring areas that other assessments would overlook to determine the overall attack chain. Unlike a penetration testing exercise, which usually lasts for around a week or two, a red teaming engagement should be considerably longer. The total elapsed time of an engagement will be several months, or even up to a year, with the team carrying out a series of different exercises during that time and allowing time gaps in between. During the exercise, the team works to identify vulnerabilities and formulate plans on how criminals could exploit the identified weaknesses. These could lie within a business’ people, network, company inboxes, or even physical access to offices. There are several stages to a red teaming engagement, both on a technical and physical level. ... The red team will spend a significant portion of time mapping out the various physical and technical access points to an organisation before they attempt to breach. The preparation for a red teaming exercise takes significantly longer than other security assessments, as there is often a very specific set of targets in mind, rather than testing any and every area of the business.


Navigating Active Directory Security: Dangers and Defenses

Threat actors typically need initial access on a domain-joined system in an organization, says Natarajan, and they can achieve it in multiple ways, including spear-phishing emails with malicious attachments, drive-by download attacks, and exploiting a vulnerability in an Internet-facing system. Once a victim runs the malicious binaries, the attacker has a better chance of gaining initial access to the system. They could exploit other system flaws to gain privileged administrative access, and AD reconnaissance tools can help them understand the directory structure and choose their targets. Various misconfigurations – which experts agree are plentiful in AD environments – can help them escalate their privileges to domain administrator. "To me, it's almost more attractive because there's not a patch for that," says Will Schroeder, technical architect at SpecterOps, of misconfigurations from an attacker's perspective. "There are ways that people can fix it, but over time this kind of debt and misconfiguration can build up." Because AD systems are so complex, little things can create large security holes over time.


Programming Evolution: How Coding Has Grown Easier in the Past Decade

In the past decade, APIs have played a huge role in the evolution of programming. It's easy for developers to have a love-hate relationship with APIs. APIs create additional security risks that programmers need to manage. They often place limits on which functionality you can implement within an API-dependent app because you can only do whatever the API supports. And APIs can become single points of failure for applications that depend centrally on them. On the other hand, APIs make the lives of programmers easier in the sense that they make it fast and simple to integrate disparate services and data. Until about 10 years ago, if you wanted to import data from a third-party platform into your app, you probably would have had to resort to an "ugly" technique--such as scraping the data off a web interface. Today, you can easily and systematically import the data using the platform's API ... Until about a decade ago, not only were there relatively few open standards that major vendors supported, but companies often went out of their way not to make their platforms compatible with those of external organizations. 
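The contrast between scraping and API-based import can be sketched in a few lines. This is a minimal illustration, not any particular platform's API: the payload shape and field names are hypothetical, standing in for the structured JSON a real platform endpoint would return.

```python
import json

# A hypothetical JSON payload, as a platform API might return it. With
# scraping, the same fields would have to be pulled out of HTML markup
# that can change without notice; here the structure is a stable contract.
API_RESPONSE = '{"user": {"id": 42, "name": "Ada"}, "orders": [{"id": 1, "total": 9.99}]}'

def import_orders(payload: str) -> list:
    """Extract structured order records from an API response."""
    data = json.loads(payload)
    return data["orders"]

print(import_orders(API_RESPONSE))  # [{'id': 1, 'total': 9.99}]
```

The systematic part is the point: the importer works on any response with the same schema, with no markup parsing involved.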


Understanding and stopping 5 popular cybersecurity exploitation techniques

Criminals use stack pivoting to bypass protections like DEP by chaining ROP gadgets in a return-oriented programming attack. With stack pivoting, attackers can pivot from the real stack to a new fake stack, which can be an attacker-controlled buffer such as the heap. The future flow of program execution can then be controlled from the heap. While Windows provides export address filtering (EAF), a next-gen cybersecurity solution can provide an access filter that prevents code from reading Windows executable (PE) headers and export/import tables, using a special protection flag to protect memory areas. An access filter should also support an allowlist so heuristics can be tweaked as needed. ... Many advanced, next-gen cybersecurity solutions place hooks on sensitive API functions to intercept and perform checks, such as antivirus scanning, before allowing the kernel to service the request. Criminals can take advantage of the fact that only sensitive functions are monitored. By calling an unmonitored, non-sensitive function at an offset (to intentionally address an important kernel service instead), cybercriminals can often evade security software. 
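A common mitigation heuristic against stack pivoting is a bounds check: at each hooked API call, verify that the thread's stack pointer still lies inside the stack region the OS assigned to that thread, since a pivot onto the heap fails this test. The toy model below illustrates only that check; the addresses are made-up values, and real mitigations read the bounds from OS thread metadata rather than constants.

```python
def stack_pointer_in_bounds(sp: int, stack_limit: int, stack_base: int) -> bool:
    """Return True if the stack pointer lies inside the thread's
    legitimate stack region (limit = lowest address, base = highest,
    mirroring how Windows records per-thread stack bounds)."""
    return stack_limit <= sp < stack_base

# Made-up addresses for illustration only.
STACK_LIMIT, STACK_BASE = 0x00200000, 0x00300000

assert stack_pointer_in_bounds(0x002FF800, STACK_LIMIT, STACK_BASE)      # normal call
assert not stack_pointer_in_bounds(0x00A40000, STACK_LIMIT, STACK_BASE)  # pivoted to heap
```

A pivoted stack pointer pointing into a heap buffer falls outside the recorded range, which is what lets the hook reject the call before the kernel services it.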


AI has become a design problem

All the best data, model, and development practices in the world cannot fully guarantee perfectly behaved AI. In the end, good user interface design has to appropriately present AI to end users. An effective user interface can, for instance, tell the user the provenance of its insight, recommendations, and decisions. ... Historically, UIs presented data as matter-of-fact. Common lists of data were not suspect; they were simply regurgitating what was stored. But increasingly, presentations of data are sourced, culled, and shaped by AI and therefore carry with them the suspect nature of the AI’s curation. UI design must introduce new mechanisms to allow users to inspect data provenance and reasoning and introduce visual cues to better convey data confidence and bias to the user. As we navigate the intricacies of a technology already integrated into many of our systems, we must design these systems in a responsible manner, mindful of transparency, privacy, and fairness. Design can frame AI-driven user experiences to end users in a manner that engenders trust and helps the end user understand the scope, strengths, and weaknesses of a given system. In turn, fear and mistrust are alleviated around the mysterious black boxes.
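One concrete way to support provenance in the UI is to make the AI layer hand the interface more than a bare answer. A minimal sketch, with illustrative field names (no standard is implied):

```python
from dataclasses import dataclass

@dataclass
class Insight:
    """An AI-produced value bundled with the metadata a UI needs to
    present it honestly: where it came from and how sure the model is."""
    text: str
    sources: list       # provenance: which records or documents fed this insight
    confidence: float   # 0.0-1.0, surfaced to the user as a visual cue

def render(insight: Insight) -> str:
    """Format an insight with its provenance for display."""
    srcs = ", ".join(insight.sources)
    return f"{insight.text} (confidence {insight.confidence:.0%}; based on: {srcs})"

rec = Insight("Restock SKU-118", ["sales Q2", "inventory feed"], 0.72)
print(render(rec))  # Restock SKU-118 (confidence 72%; based on: sales Q2, inventory feed)
```

The design choice is that provenance travels with the value, so every view that renders the insight can expose its sources and confidence rather than presenting it as matter-of-fact.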


4 Key Observability Metrics for Distributed Applications

Latency is the amount of time it takes between a user performing an action and its final result. For example, if a user adds an item to their shopping cart, the latency would measure the time between the item addition and the moment the user sees a response that indicates its successful addition. If the service responsible for fulfilling this action degraded, the latency would increase, and without an immediate response, the user might wonder whether the site was working at all. To properly track latency in a distributed context, it's necessary to follow a single event throughout its entire lifetime. ... Tracking error rates is rather straightforward. Any 5xx (or even 4xx) issued as an HTTP response by your server should be tagged and counted. Even situations that you've accounted for, such as caught exceptions, should be monitored because they still represent a non-ideal state. These issues can act as warnings for deeper problems stemming from defensive coding that doesn't address actual problems. Kuma can capture the error codes and messages thrown by your service, but this represents only a portion of actionable data. 
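The two metrics discussed above can be tracked with a very small in-process recorder. This is a sketch of the idea, not a substitute for a real observability stack; the class and method names are illustrative.

```python
import statistics
from collections import Counter

class RequestMetrics:
    """Minimal tracker for per-request latency and the 4xx/5xx error rate."""

    def __init__(self):
        self.latencies_ms = []
        self.status_counts = Counter()

    def record(self, latency_ms: float, status: int):
        self.latencies_ms.append(latency_ms)
        self.status_counts[status] += 1

    def error_rate(self) -> float:
        """Fraction of responses with a 4xx or 5xx status code."""
        total = sum(self.status_counts.values())
        errors = sum(n for code, n in self.status_counts.items() if code >= 400)
        return errors / total if total else 0.0

    def p50_latency(self) -> float:
        """Median latency in milliseconds."""
        return statistics.median(self.latencies_ms)

m = RequestMetrics()
for lat, code in [(12.0, 200), (15.0, 200), (480.0, 503), (14.0, 404)]:
    m.record(lat, code)
print(m.error_rate())   # 0.5
print(m.p50_latency())  # 14.5
```

In practice each request would also carry a trace ID so a single event can be followed across services, which is what makes the latency number meaningful end to end.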


How to avoid the network-as-a-service shell game

Our Rule One says that your project has to meet financial targets, meaning a target ROI. NaaS makes it easier to figure out whether a project meets CFO targets, but remember that anything sold as a service has to include a profit margin for the seller. The cloud has not replaced every data center, not because of CIO intransigence but because the cloud isn’t always cheaper. NaaS wouldn’t always be cheaper either, so a NaaS-based project is going to have to prove it’s a better strategy than capital purchasing would be. Your trip to the CFO’s office just got more complicated. Another issue with NaaS is cost control. With traditional networking, you pay a fixed amount for fixed capacity. Your cost is predictable. Any kind of consumption-based pricing risks generating some truly eye-popping bills if the usage is greater than expected, and most such systems really don’t make it easy to ensure that excess usage doesn’t happen. Serverless cloud computing customers are already whining over multi-hundred-percent cost overruns. It seems like you can either face your CFO during project approval or face your CFO when you blow your budget. The latter isn’t likely a great career move for you.
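The ROI argument above comes down to a cost comparison under usage uncertainty, which is easy to make concrete. All figures below are hypothetical, chosen only to show how an overrun flips the comparison:

```python
def capex_total(purchase_cost: float, annual_support: float, years: int) -> float:
    """Total cost of owning fixed capacity over the period: predictable."""
    return purchase_cost + annual_support * years

def naas_total(monthly_base: float, unit_price: float,
               monthly_units: float, years: int) -> float:
    """Consumption-priced NaaS: cost scales with actual usage."""
    return (monthly_base + unit_price * monthly_units) * 12 * years

# Hypothetical figures: at the forecast usage, NaaS beats buying...
print(naas_total(2_000, 0.05, 100_000, 3))  # 252000.0
print(capex_total(180_000, 30_000, 3))      # 270000
# ...but if usage doubles, the consumption-priced bill blows past it.
print(naas_total(2_000, 0.05, 200_000, 3))  # 432000.0
```

The fixed-capacity number never moves; the NaaS number does, which is exactly the budget risk the column is warning about.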


What You Need to Know About Ransomware Insurance

Ransomware insurance is like any other type of cyber insurance. "Cyber insurance is about assessing the cyber risk, determining the potential losses due to attacks, and then obtaining coverage," said Bhavani Thuraisingham, a professor at the University of Texas at Dallas, as well as the executive director of the university’s Cyber Security Research and Education Institute. The unique challenge with ransomware is that once an attacker gets into the system, they have access to everything within. "[They aren't] just stealing your data but crippling your system by encrypting all of the data and files so that you can't have access unless you pay them a ransom," she explained. "It's like someone breaking into your house and stealing your jewelry, but also kidnapping your child and demanding a ransom," Thuraisingham quipped. Ransomware insurance is generally sold along with, or in addition to, a general cyber insurance policy. The appropriate cyber liability insurance policy depends primarily on the applicant's industry and operations, observed Jack Dowd, an account executive at insurance provider The Dowd Agencies. 


Ensuring digital maturity in the boardroom

Becoming digitally mature allows organisations to future-proof their business. Something that became clear during the pandemic was that the ability to remain agile is paramount. Digital transformation enables this. Utilising cloud technologies gives enterprises the freedom and flexibility to work wherever and however it is necessary. From here, businesses can further foster a flexible culture, promoting a better work-life balance for employees. However, as society climbs back to normal, many within the boardroom will understand that there are more benefits to digital transformation than remote working. Scalability is an essential factor. Technology is not bound by physical restrictions; digital services and solutions can be increased, enhanced and altered at a moment’s notice. This not only helps to keep organisations agile, but also provides the foundations of future growth. These increased levels of scalability and agility combine to enable greater growth and profitability for businesses. Efficient and cost-effective processes allow leaders to focus on wider business opportunities, and greater access to data produces better decision making, faster. 


Ransomware Landscape: Notorious REvil Is Only One Operator

Many ransomware-wielding attackers will first attempt to contact victims directly and get them to pay a ransom, promising that if the organization does so quickly, then attackers will never leak their data or attempt to "name and shame" them. Hence the number of victims who simply pay remains unknown. Furthermore, the damage caused by a single attack from a more sophisticated ransomware operation, such as REvil, can be severe. Miami-based Kaseya's software is used by a number of managed service providers to manage clients' endpoints, and up to 60 MSPs and 1,500 of their clients were infected by REvil - aka Sodinokibi - ransomware in that single attack alone. REvil has also been tied to the attack against meat-processing giant JBS - which paid attackers an $11 million ransom - and many other attacks. Another operation, called DarkSide, claimed credit for the May attack against Colonial Pipeline Co., which supplies 45% of the fuel used along the East Coast. Shortly after the attack, DarkSide claimed it would shut down its ransomware-as-a-service operation because of unwanted publicity and attention.



Quote for the day:

"It is the responsibility of leadership to provide opportunity, and the responsibility of individuals to contribute." -- William Pollard

Daily Tech Digest - July 11, 2021

4 Ways AI Should Be Playing a Role in Your DX Strategy

The real value of AI lies in the data that it is able to process and analyze. “The backbone of AI and ML is data and in order to get real business value out of AI and ML, you need deep and broad data that covers your entire digital experience ecosystem. Once that data is harnessed and correlated, AI and ML can be a game changer for the enterprise with deep, contextual, and automated insight into your digital experience. AI and ML can then be used to proactively identify investments that will provide the most ROI, accelerate time intensive efforts like root cause analysis, and reduce the workload on your IT team by automating repetitive tasks.” Daniel Fallmann, CEO at Mindbreeze, an insight engine provider, shared his thoughts on how AI is used to analyze data to drive business process transformation. “...you can learn if a customer really needs a specific product or service by using AI to review data from the past, such as published press releases, subscriptions, form information on your website, and more,” Fallmann said. Like Malloy, Fallmann reiterates the value of diving deep and consolidating disparate data in order to reap the benefits of holistic views.


Tech Has Advanced Rapidly—And Cybersecurity Needs To Catch Up

Data is any business’s most critical asset. Like the valuable and confidential items in your home, it’s not easily retrieved once it’s in the wrong hands. Ultimately, when it comes to cyberattacks, it will always be a case of not “if” but “when” an SME will suffer a breach or fall foul to an attack. SMEs must focus on understanding their risks, getting the basics right and creating a strong “human firewall” as the foundation of their cybersecurity strategy. Ask yourself: “Am I protecting my employees, my customers and my reputation? Am I protecting my data and assets?” Starting here will help SMEs understand the risks and focus on the basics that will have the biggest impact. This could consist of installing phishing protection and firewalls across all devices, investing in authentication methods or keeping software and anti-malware up to date. Invest in training to ensure all employees have a true understanding of the cybersecurity risks the business faces, including how to identify phishing scams and what the process is on reporting them. Finally, keep security top of mind, and don’t underestimate its importance.


Microsoft Office Users Warned on New Malware-Protection Bypass

“The malware arrives through a phishing email containing a Microsoft Word document as an attachment. When the document is opened and macros are enabled, the Word document, in turn, downloads and opens another password-protected Microsoft Excel document,” researchers wrote. Next, VBA-based instructions embedded in the Word document read a specially crafted Excel spreadsheet cell to create a macro. That macro populates an additional cell in the same XLS document with an additional VBA macro, which disables Office defenses. “Once the macros are written and ready, the Word document sets the policy in the registry to ‘Disable Excel Macro Warning,’ and invokes the malicious macro function from the Excel file. The Excel file now downloads the Zloader payload. The Zloader payload is then executed using rundll32.exe,” researchers said. Because Microsoft Office automatically disables macros, the attackers attempt to trick recipients of the email to enable them with a message appearing inside the Word document. “This document created in previous version of Microsoft Office Word. To view or edit this document, please click ‘Enable editing’ button on the top bar, and then click ‘Enable content’,” the message reads.
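One defensive takeaway is to watch for the registry tampering step. Public write-ups of this campaign report the macro flipping Office macro-security values such as AccessVBOM and VBAWarnings; treat those value names, and the Office-version path below, as assumptions rather than a complete list. A minimal sketch that scans an exported .reg dump for them:

```python
# Value names reported in public analyses of Zloader-style macro chains;
# exact registry paths vary by Office version, so these are assumptions.
SUSPICIOUS_NAMES = ("AccessVBOM", "VBAWarnings")

def find_tampered_values(reg_dump: str) -> list:
    """Return the suspicious macro-security values set to 1 in a .reg dump."""
    hits = []
    for raw in reg_dump.splitlines():
        line = raw.strip()
        for name in SUSPICIOUS_NAMES:
            if line.startswith(f'"{name}"=') and line.endswith("dword:00000001"):
                hits.append(name)
    return hits

DUMP = r'''[HKEY_CURRENT_USER\Software\Microsoft\Office\16.0\Excel\Security]
"AccessVBOM"=dword:00000001
"VBAWarnings"=dword:00000001'''
print(find_tampered_values(DUMP))  # ['AccessVBOM', 'VBAWarnings']
```

Either value being set to 1 on an endpoint where users never touch macro settings is a reasonable trigger for closer inspection, since the macro chain relies on silently lowering those defenses.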


How cybersecurity is getting AI wrong

Unknown unknowns are so prevalent in cyberspace that many service providers preach to their customers to build their security strategy on the assumption that they’ve already been breached. The challenge for AI models emanates from the fact that these unknown unknowns, or blind spots, are seamlessly incorporated into the models’ training datasets and therefore attain a stamp of approval and might not raise any alarms from AI-based security controls. For example, some security vendors combine a slate of user attributes to create a personalized baseline of a user’s behavior and determine the expected permissible deviations from this baseline. The premise is that these vendors can identify an existing norm that should serve as a reference point for their security models. However, this assumption might not hold water. For example, undiscovered malware may already reside in the customer’s system, existing security controls may suffer from coverage gaps, or unsuspecting users may already be suffering from an ongoing account takeover. Errors: It would not be brazen to assume that even staple security-related training datasets are probably laced with inaccuracies and misrepresentations. 
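The baseline-and-deviation scheme described above, and its blind spot, can be shown in a few lines. This is a deliberately simple z-score-style sketch (real vendors combine many attributes); the login counts are invented:

```python
import statistics

def build_baseline(daily_logins: list) -> tuple:
    """Personalized baseline: mean and standard deviation of a user metric."""
    return statistics.mean(daily_logins), statistics.stdev(daily_logins)

def is_anomalous(value: float, mean: float, stdev: float, k: float = 3.0) -> bool:
    """Flag values more than k standard deviations from the baseline."""
    return abs(value - mean) > k * stdev

# Clean training window: a burst of 50 logins/day stands out.
mean, sd = build_baseline([5, 6, 4, 5, 6, 5])
print(is_anomalous(50, mean, sd))  # True

# Contaminated window: an ongoing account takeover sits inside the training
# data, inflating the baseline, and the same activity no longer alarms.
mean, sd = build_baseline([5, 6, 4, 48, 52, 50])
print(is_anomalous(50, mean, sd))  # False
```

The second case is the stamp-of-approval problem in miniature: the model faithfully learned a norm, but the norm itself included the attack.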


What are the most common cybersecurity challenges SMEs face today?

The ENISA report provides advice for SMEs to successfully cope with cybersecurity challenges, particularly those resulting from the COVID-19 pandemic. With the current crisis, traditional businesses had to resort to technologies such as QR codes or contactless payments they had never used before. Although SMEs have turned to such new technologies to maintain their business, they often failed to increase their security in relation to these new systems. Research and real-life experience show that well prepared organizations deal with cyber incidents in a much more efficient way than those failing to plan or lacking the capabilities they need to address cyber threats correctly. Juhan Lepassaar, EU Agency for Cybersecurity Executive Director said: “SMEs cybersecurity and support is at the forefront of the EU’s cybersecurity strategy for the digital decade and the Agency is fully dedicated to support the SME community in improving their resilience to successfully transform digitally.” In addition to the report, ENISA also publishes the Cybersecurity Guide for SMEs: “12 steps to securing your business”. 


Your dev team lead is not controlling enough

When I first got promoted to team lead I was highly controlling. I literally did most of my team's work for them. I worked seventeen hours a day, six days a week to ensure every single task was completed to my exact specification. The people that worked for me were unhappy (some actively disliked me personally) but we got results that the CEO cared about so it went unnoticed. And I was good at managing up, so I actually got promoted for this behavior! I was in my early twenties and motivated by the wrong things (power, money, and, of course, control). I look back on the period with embarrassment and I've actually apologized to many of the people who worked for me back then. ... when I realized micro-management was wrong, I naturally swung the pendulum in the exact opposite direction. I told myself I was hiring smart people and I should leave them alone. I'm good at hiring so it kind of worked. But, again, the people who worked for me suffered -- this time in a way that they noticed much less. Good people actually want feedback! It's not good for their work to go unchallenged because then it's harder to improve. 


Cyber security too often takes back seat in C-Suite

Chief information security officers are studying these threats daily and are in the best position to communicate what they’ve learned to decision makers. But too often, Hamilton said, CISOs have trouble translating their technical findings for board room audiences. While the top executives could often use a little more training on the ins and outs of technological threats, information security executives also need to do a much better job of reading the room. CISOs must present their information in terms of risk to the bottom line. “Scary Russian cyber buffer overflow SQL injection ... nobody cares,” Hamilton said. ... “It’s more about being able to say something like, ‘we have 1 million records meeting the definition of personally-identifiable information, and we know that they’re worth about $200 apiece if you’ve got to clean up a data breach. That’s $200 million in potential liability. Can I have $50,000 for controls to reduce that risk in half?” While the knee-jerk reaction with cyber security may be to name an organization’s best technical expert the CISO, that can end up backfiring unless that person is willing to sharpen their understanding of the business they’re trying to protect.
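Hamilton's pitch is just expected-loss arithmetic, and putting it in code makes the framing reusable. The record count and per-record cost come from the quote above; the breach probabilities are assumptions added here for illustration, since the quote doesn't state one:

```python
def expected_loss(records: int, cost_per_record: float, breach_probability: float) -> float:
    """Annualized expected loss if all records are exposed in a breach."""
    return records * cost_per_record * breach_probability

# 1M PII records at ~$200 cleanup cost apiece (figures from the quote),
# with an assumed 10% annual breach probability, halved by the controls.
before = expected_loss(1_000_000, 200, 0.10)  # $20M expected annual loss
after = expected_loss(1_000_000, 200, 0.05)   # controls cut the risk in half
print(f"risk reduction: ${before - after:,.0f} for a $50,000 control spend")
```

Framed this way, a $50,000 spend buying millions in expected-loss reduction is a board-room argument rather than a technical one, which is exactly the translation the passage is asking CISOs to make.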


The Rise of the ML Engineer

Just fifty years ago, machine learning was a new idea. Today it’s an integral part of society, helping people do everything from driving cars and finding jobs to getting loans and receiving novel medical treatments. When we think about what the next 50 years of ML will look like, it’s impossible to predict. New, unforeseen advancements in everything from chips and infrastructure to data sources and model observability have the power to change the trajectory of the industry almost overnight. That said, we know that the long run is just a collection of short runs, and in the current run, there is an emerging set of tools and capabilities that are becoming standards for nearly every ML initiative. We have written about the 3 most important ML tools: a feature store, a model store, and an evaluation store. Click here for a deeper dive. Beyond the tools that power ML initiatives, the roles that shape data teams are also rapidly evolving. As we outline in our ML ecosystem whitepaper, the machine learning workflow can be broken into three stages — data preparation, model building, and production — and at every step of the process, the skills and requirements are different:


Cloud computing's destiny: operating as a single global computer, enabled by serverless

For all the progress of what's happening on cloud, we have to "get to the point where we get the cloud to work as if it was a single infinitely powerful computer," says Nagpurkar. Right now, there are too many obstacles in the way, she adds. "Think about the simplicity of just working on your laptop. You have a common operating system and tools you're familiar with. And, most importantly, you're spending most of your time working on code. Developing on the cloud is far from that. You have to understand the nuances of all the cloud providers -- there's AWS, Azure, GCP, IBM, and private clouds. You have to provision cloud resources that might take a while to get online. And you have to worry about things like security, compliance, resiliency, scalability, and cost efficiency. It's just a lot of complexity." Proprietary software stacks from different vendors "not only add to all this complexity but they stifle innovation," she says. "Key software abstractions start with the operating system. Linux as the operating system for the data center era unleashed this proliferation of software, including virtualization technologies like containers. That ushered in the cloud era."


'Barely able to keep up': America's cyberwarriors are spread thin by attacks

Cybersecurity professionals can barely keep up despite significant industry growth in recent years — and plenty more money is pouring in. That money is chasing a limited talent pool, with almost a half-million cybersecurity jobs unfilled, according to CyberSeek, a project that tracks the industry and is sponsored by the federal National Institute of Standards and Technology. The government is also on a massive hiring spree, with the Department of Homeland Security racing to fill more than 2,000 cybersecurity jobs. Secretary Alejandro Mayorkas called it a victory last week that it had recently onboarded almost 300 new employees and offered jobs to 500 more. It’s a problem that some in the cybersecurity industry are hoping to address in the years to come. The National Cryptologic Foundation, a nonprofit affiliate for the National Security Agency, offers free educational materials to middle schools. The Center for Infrastructure Assurance and Security at the University of Texas at San Antonio has produced free cybersecurity educational games for students in an effort to inspire young people to consider careers in the industry.



Quote for the day:

"The great leaders have always stage-managed their effects." -- Charles de Gaulle

Daily Tech Digest - July 10, 2021

IT leadership: 3 ways to enable continuous improvement

"If technical leadership has one job, it is to tend to that socio-technical system so that team members can spend as much time as possible on priorities that move the business forward, not on trying to navigate or fix internal systems. Pay really close attention to those feedback loops and make them tight. Invest in the work where it makes sense – not just trying to fix everything and make it perfect, but look for where you can get really big wins and not have to reinvent the wheel. “It turns out we have the benefit of knowing all the science that even just five to 10 years ago, we didn’t have. Read Accelerate, for example. There used to be a lot of ‘I think’ statements, and while it worked, it was very ‘cult-y’ and it was very based on osmosis. If you had been lucky enough to work with one of the best engineers in the world, then you kind of knew this stuff – it was all just contagious. Now, we have this freely available data, and we should be using it.” ... “If your job is to align people, motivate, and understand the friction points, you really have to walk a mile in their shoes, and you have to find the outcomes that fix those problems,” says GitHub’s Dana Lawson.


New SaaS Security Report Dives into the Concerns and Plans of CISOs in 2021

For years, security professionals have recognized the need to enhance SaaS security. However, the exponential adoption of Software-as-a-Service (SaaS) applications over 2020 turned slow-burning embers into a raging fire. Organizations manage anywhere from thirty-five to more than a hundred applications. From collaboration tools like Slack and Microsoft Teams to mission-critical applications like SAP and Salesforce, SaaS applications act as the foundation of the modern enterprise. 2020 created an urgent need for security solutions that mitigate SaaS misconfiguration risks. Recognizing the importance of SaaS security, Gartner named a new category, SaaS Security Posture Management (SSPM), to distinguish solutions that have the capabilities to offer a continuous assessment of security risks arising from a SaaS application's deployment. To understand how security teams are currently dealing with their SaaS security posture and what their main concerns are, Adaptive Shield, a leading SSPM solution, commissioned an independent survey of 300 InfoSecurity professionals from North America and Western Europe, in companies ranging from 500 to more than 10,000 employees.


How Enterprise Architects Can Evolve from Order Takers to Trusted Advisors

The value of strong interpersonal skills can’t be overstated when it comes to building trust with executives. Persuasive advisors are those that are confident and clever in how they communicate their ideas to others. That being said, there are also tools EAs can leverage to appeal to executives and secure buy-in for projects. Though they may propose and finance IT change projects, many executives are focused on other areas of business besides IT logistics. As such, EAs won’t get their ideas across by using heavily technical jargon or by presenting complex spaghetti models to them. Instead, they need to address executives’ personal priorities when detailing their proposed solutions: are they looking to cut costs? Make their data stores more compliant? These are the value points that EAs need to address when presenting their ideas. With a platform that enables EAs to manipulate their organizational data to support different viewpoints, EAs can speak to executives’ and stakeholders’ personal interests, instead of turning them away.


Java on Visual Studio Code Update – June 2021

Remote Development has always been a popular feature in Visual Studio Code and it allows developers to use a container for a full-feature development environment. For the upcoming quarters, we are working on supporting more Java versions as well as the Spring framework in the containers so developers can access those technologies in their remote development scenarios. We have just released support for Java 16 in the remote dev container which is shown in the later sections of this post. In addition, GitHub Codespaces is a configurable online development environment that allows you to develop entirely in the cloud. Visual Studio Code plays a critical role in Codespaces as it provides the essential code editing experience. In terms of Java, the team is working on providing the support for Java language extensions in Codespaces so Java developers can find all Java related tools they need. For details on how to request access for Codespaces, please follow the official Codespaces documentation here. In terms of testing, Visual Studio Code Java is targeting to adopt the new Testing APIs introduced recently. 


OpsRamp’s Ciaran Byrne on managing multicloud and hybrid environments

The biggest challenge is dealing with the complexity. It’s not just a matter of cloud and on-premises; you have networks, servers, storage, virtual environments, containers, and applications that you have to discover and collect metrics on, and those are running in both cloud and on-premises environments. In most cases, you’ll be managing these mixed environments with multiple monitoring tools, leading to tool sprawl. You’ll have to make sense of large volumes of data coming from these mixed environments managed by a diverse toolset. The mixed environments will likely have interdependencies which may make it difficult to be aware of and troubleshoot issues. Troubleshooting may also be more complicated, as each of the environments will have their own nuances for investigating and resolving issues that require operators and admins to have a broad range of skills. Once you’ve “solved” the problem of monitoring these hybrid environments, you have to understand which parts of this hybrid infrastructure are supporting which application services.


How RPA-As-A-Service Can Power The Next Mid-Market Winner

Current RPAaaS offerings, including the solution we offer at AutomationEdge, are low on code and high on “drag and drop” functionality, making the intuitive user interfaces open to automation across the enterprise. With an RPAaaS solution that has plug-ins that connect to the common systems of record in the enterprise, RPA is no longer an island, but rather a means of intergalactic travel across the enterprise galaxy. ... Full-blown, on-premise RPA implementations come with mature governance models, which are unnecessary at the start of a company’s life. A check-the-box approach to governance that meets all mandatory legal, financial and accounting requirements is adequate for this stage of growth. Not all processes need to be run every day; many are run once a quarter or even annually. Employees can monitor these processes much easier. Another advantage of RPAaaS is having access to a robust RPAaaS community that can help developers keep in touch with the latest hacks, patches and previews that they can then bring into their craft.


Top 10 Chrome Flags you should consider enabling in July 2021

Chrome Flags are basically experimental features that Google is currently testing on either Chrome OS or the Chrome browser. It’s important to note some Flags are exclusive to Chrome OS, while others work on Chrome browsers across Android, iOS, macOS, and Windows. Eventually, Flags are removed as they either become part of a stable Chrome release or get absorbed into Chrome developer tools. Once you enable a Flag, you need to restart your browser (or restart your machine if you’re on Chrome OS) for the change to take effect. You should realize that enabling Flags does carry some risk. Not all Flags are stable and may cause some unintended behavior with your browser or device. It’s also important to understand that browser-based Flags are not tested for online security protocols. This means you carry some security risk when conducting financial transactions online while using untested Chrome Flags. If you run different versions of Chrome OS or Chrome browser, you can find different Flags available. 


Industrial cybersecurity: How to protect your assets in the digital transformation age

Industrial businesses that embrace transformation and have a holistic view of cybersecurity are benefitting from diverse technology ecosystem development, including connected devices, edge control, apps, analytics and cloud services, which are enhancing business performance at an unprecedented pace. It’s vital that your organization’s approach to security is part of the organizational culture – using components that meet recognized standards and include encryption by default. Security must be integral to the design of any process or operation and fundamentally baked into the services that support the operation of your systems and business objectives. The tsunami of risks focused on operational technology (OT) ranges from the exposure of intellectual property and lost production systems or data to serious fines and reputational loss. Cybersecurity is a multi-faceted discipline requiring a proactive approach across the business. For your business to be best prepared against threats, it’s important to consider the following elements:


5 Skill Sets A Blockchain Developer Must Have

Data structures are the first essential skill a Blockchain programmer should have, since engineers must work with them constantly when building and deploying systems. The entire Blockchain system is made up of data structures: a block is itself a data structure, encapsulating a group of transactions, and the chain of blocks functions as the public ledger. ... Cryptography is the discipline of designing procedures and algorithms that prevent a foreign entity from reading and learning the content of private messages during a communication session. The word derives from the ancient Greek kryptos (“hidden”) and graphein (“to write”). ... Interoperability is the ability to view and exchange information across many blockchain systems. For instance, if someone sends data to another blockchain, can the receiver read, understand, and respond to it with minimal effort?
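The block-as-data-structure idea can be sketched in a few lines of Python. This is a minimal illustration, not any particular blockchain's format: the field names and transaction strings are invented, and each block simply stores its transactions plus the hash of the previous block, which is what chains them into a tamper-evident ledger.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically with SHA-256."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions, prev_hash: str) -> dict:
    """A block encapsulates transactions plus the previous block's hash."""
    return {"transactions": transactions, "prev_hash": prev_hash}

# Chain two blocks: block 2 stores block 1's hash, so tampering with
# block 1 changes its hash and breaks the link recorded in block 2.
genesis = make_block(["alice->bob:5"], prev_hash="0" * 64)
second = make_block(["bob->carol:2"], prev_hash=block_hash(genesis))

assert second["prev_hash"] == block_hash(genesis)
```

Altering any transaction in `genesis` after the fact changes `block_hash(genesis)`, so the `prev_hash` stored in `second` no longer matches, which is exactly the integrity property the excerpt describes.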


5 elements of servant leadership

Setting expectations is quite simply clearly defining what is needed from each individual in the performance of their assigned role. It is all of the whats and whens of the position. In some roles and organizations, it may also articulate the hows. Expectations define the goals, activities, and behaviors that drive the measurable results the manager is charged with. For employees, it clarifies their contribution to the organization - both what it is and what it should be. It represents the outcomes to which they are being held accountable and gives them objective behaviors and metrics against which to check themselves. ... Mobilization is about skills and the effective application of those skills to drive the team’s mission. Managers have to ensure that their team is both equipped and deployed to contribute most effectively to the organization. This includes matching team members’ strengths with specific tasks or roles and constantly asking whether there are other or alternative contributions that could be made. The most effective managers hold their direct reports responsible for skills development by setting a vision for what is needed, suggesting methods, and making opportunities available.



Quote for the day:

"Leadership is particularly necessary to ensure ready acceptance of the unfamiliar and that which is contrary to tradition." -- Cyril Falls

Daily Tech Digest - July 09, 2021

How quantum networking could transform the internet

If QC comes to full fruition (an event that is far from certain), conceivably, anyone with access to a quantum computer could decrypt a strongly encrypted communication in mere minutes. Who has access to a quantum computer? Would you believe you do? QC's first practitioners are making quantum systems available for programming by registered experimenters and researchers, no credentials required, in many cases for free. It's not easy, by any means, nor does it make much sense to rational people, but QC service (such as it is) is operational. If QN comes to full fruition (see above), some means of securing communications will become feasible again. And because of the physics principles QN would leverage, any "back door," such as some government agencies have asked network equipment makers to include, would be entirely impossible to engineer. What we have here, to paraphrase Strother Martin, is an open, collaborative, academic effort whose success would immediately trigger "failure to communicate" securely by conventional means. The intellectual property value of any digital encryption technology for classical systems would plummet like a spent fuel rod in a core meltdown.


The Bank of the Future Will Have Data Vaults and Money Vaults

Our belief is that the banks of the future will not necessarily be seen as banks or finance institutions. They will be seen more and more as data hubs or ecosystems for a much broader set of industry verticals. Not only will they provide better and more relevant financial products to consumers, but they will be in a perfect position to leverage data to form data alliances and data partnerships with other members of the ecosystem, such as grocery stores, airlines, energy companies, etc. And it will be up to the user whether they are willing to share their information or not. If they share, they will receive benefits and services from members of these alliances - complementary businesses that share data, based on the consent of the user, to deliver better, more relevant and higher-impact service. ... There are a number of interesting interaction layers that enable banks to interact with their customers better. These could be conversational interfaces, like we have seen with chatbots. There are also voice assistants, like Google Home and Alexa.


Ransomware recovery: Plan for it now

A big ransomware attack is not the time to go it alone. There are resources available that will assist you in halting the attack and recovering when it feels like all hell is breaking loose, and there are steps to take that might help authorities catch the criminals. Part of your ransomware-response plan should include the contact information of these resources. If you have a cyber-insurance policy, it can be very helpful: it can put you in touch with specialists to help guide you through your response. Contact them now, before you are attacked, to establish their response process and document it in your plan. If you don’t have such a policy, consider getting one. You should also immediately contact the local field office of the FBI. Its level of involvement in a particular case will be driven by the extent and nature of the attack, but the agency says that notifying it of all attacks helps it respond better to ransomware in general. The FBI also has access to tools and resources unavailable to many other organizations, which can help especially if another country is identified as the source. When reaching out for help, beware of companies that claim they can decrypt the data for you.


Applying Lean Tools and Techniques to Scrum

Scrum teams are beginning to discover that Scrum as a framework can in fact be wasteful. That waste is often masked by the in-person interactions Scrum has always advocated for. While Scrum pairs well with other Agile frameworks and approaches, it is rarely paired with Lean, an approach focused on waste reduction, which would make this visible. This has led to a lack of awareness of, or focus on, the elements of waste that exist. The move to remote work has brought a lot of changes to contend with beyond how a team operates. You have subpar facilities compared to an office: from a simple lack of space, with some friends of mine working off a kitchen table for over a year now, to not having appropriate ergonomic equipment at home to handle a 40-hour week. That’s before the added strain of other people or families in the house; in my case, I have a 2.5-year-old, an 18-month-old, and a newborn, which brings an added dimension to the day! So off the bat, the environment changes are difficult to handle, but when you keep following Scrum as per the Scrum Guide, you see a magnification of the issues, which I am attributing to waste.


The mirrored world: What are the benefits of digital twins?

Intelligent twins have the power to turn data into actionable, big-picture insights, but only if fueled by complete and accurate data. COVID-19 revealed that enterprises cannot rely on historic data blindly -- they need to check and correct their models as the world changes. This approach requires a strategy for real-time data analytics that includes provisions for both IoT devices and sensors, and tools for data analysis and utilization. When this is achieved, enterprises can unlock a new way of understanding the business, and a new way of running it. Take Ericsson and Vodafone as examples. The two companies are working with e.GO, an electric vehicle manufacturer, to develop a factory of the future. In the factory, machines connect over a 5G network, sending data to a network operations center that powers a digital twin of the factory. The twin is then used to enable just-in-time processes and smart tools that empower human workers with data-driven intelligence. Intelligent twins have powerful simulation capabilities and, with your data foundation in place, they will let you reimagine the innovation process.


Three security lessons from a year of crisis

In the early days of the pandemic, when lockdowns and restrictions across the United States were at their height, contact centers and customer service teams saw huge spikes in call volume. Banks’ phones rang off the hook with questions about account numbers and Paycheck Protection Program loans, about unemployment payments and mortgage suspensions. Business dealings that could once be conducted face-to-face with a teller in a branch office now moved onto the phone. Contact agents not deemed “essential” pivoted to working from home; new systems had to be devised, implemented, and tested almost overnight. The result? More calls, longer hold times, and longer agent conversations. Of course, not every fraudulent use of a contact center requires speaking with another human being: unscrupulous users can “mine” interactive voice response systems for personal data. In some industries, the second quarter of 2020 saw an 800% increase in year-over-year call volume. Such elevated numbers weren’t sustainable, and there’s been some drop-off in call length and volume, but in the last quarter of 2020, call durations were still up 14% compared to pre-COVID levels, and typical waits were 11 minutes longer than they were before the pandemic.


Who’s behind the Kaseya ransomware attack – and why is it so dangerous?

Hackers infiltrated Kaseya, accessed its customers’ data, and demanded ransom for the data’s return. Making the hack particularly grave, experts say, is that Kaseya is what is known as a “managed service provider”. That means its systems are used by companies too small or modestly resourced to have their own tech departments. Kaseya regularly pushes out updates to its customers meant to ensure the security of their systems. But in this case, those safety features were subverted to push out malicious software to customers’ systems. This hack was particularly egregious because the bad actors behind it had targeted the very systems typically used to protect customers from malicious software, said Doug Schmidt, a professor of computer science at Vanderbilt University. “This is very scary for a lot of reasons – it’s a totally different type of attack than what we have seen before,” Schmidt said. “If you can attack someone through a trusted channel, it’s incredibly pervasive – it’s going to ricochet way beyond the wildest dreams of the perpetrator.” Kaseya has said that between 800 and 1,500 businesses were affected by the hack, although independent researchers have pegged the figure at closer to 2,000.


Ransomware gangs seek people skills for negotiations

People with the appropriate – and not necessarily technical – skillsets to succeed in ransom negotiations are particularly valued, Kela found. “We observed multiple posts [on the dark web] describing a new role in the ransomware ecosystem, negotiators, whose purpose is to force the victim to pay a ransom using insider information and threats,” said Kivilevich. “Victims started using negotiators – while a few years ago there was no such profession, now there is a demand for negotiating services. Ransomware-negotiation specialists partner with the insurance companies and have no lack of clients. Ransom actors had to up their game as well, in order to make good margins.” As most ransom actors probably are not native English speakers, more delicate negotiations – specifically around very high budgets and surrounding complex business situations – required better English. When REvil’s representative was looking for a ‘support’ member of the team to hold negotiations, they specifically mentioned ‘conversational English’ as one of the demands. This is not a new case: actors are interested in native English speakers to use for spear-phishing campaigns.


Facebook Launches Open Source Simulation Platform Habitat 2.0

Habitat 2.0 prioritises speed and performance over a wide range of simulation capabilities and allows researchers to test new approaches and iterate more effectively. For example, the researchers have used a navigation mesh to move the robot instead of simulating wheel-ground contact. However, the platform does not support non-rigid dynamics such as liquids, films, clothes, and ropes, as well as tactile sensing or audio. This, in a way, makes the Habitat 2.0 simulator two orders of magnitude faster than most 3D simulators available to academics and industry professionals. With these capabilities, researchers can now use the platform to perform complex tasks, such as clearing up the kitchen or setting the table. For instance, the platform can simulate a Fetch robot interacting in ReplicaCAD (a 3D dataset) scenes at 1,200 SPS (steps per second) while the existing platform runs at 10 to 400 SPS. It also scales well, achieving 8,200 SPS (273x real-time) multi-process on a single GPU and nearly 26,000 SPS (850x real-time) on a single node with 8 GPUs. Facebook researchers said such speeds significantly cut down on experimentation time, from six months to as little as two days.


The post-pandemic office: how to prevent burnout

The burnout we’re seeing now is as unique as the year we’ve just lived through, and there are several contributing factors. According to the Flex Study, almost a quarter of employees found themselves working longer hours while remote. Work often brought a welcome sense of normality, and provided distraction for those lucky enough to continue working, but the longer hours took a toll on many employees. As many as 37% of employees admitted to not taking a break of at least 30 minutes during their average work-from-home day. Burnout can stem from an inability to disconnect too, with 60% of UK respondents stating they are unable to switch off from work. Of those, 23% are anxious about getting caught up on work, 22% are distracted and still looking at their phone and 15% feel a sense of guilt. As businesses decide how they will work in the future, they will also need to address video fatigue as another root cause of burnout. Video calls have been the main tool for engaging with colleagues and customers over the past year, and many find this mentally exhausting. To create a more balanced schedule, businesses must set clear expectations about which meetings should be conducted via video, and which meetings can be audio only, as well as promoting meeting-free days altogether.



Quote for the day:

"Most of the successful people I've known are the ones who do more listening than talking." -- Bernard Baruch

Daily Tech Digest - July 08, 2021

Security Problems Worsen as Enterprises Build Hybrid and Multicloud Systems

"Most organizations have had a cloud attack surface for years and didn't know about it," said Andrew Douglas, managing director in cybersecurity at Deloitte & Touche. "They were using SaaS applications and piloting different cloud providers going back a decade." Companies are under increasing pressure to move to the cloud, which the pandemic has accelerated. While security technologies are becoming more comprehensive and robust, IT managers are tempted to skip over the security planning steps and jump straight into putting new solutions into production. "There's been a lot of temptation to move quickly," said Douglas. "Our clients are trying to accelerate their organizations' move to the cloud, but whether they put in the time and investment in implementing security – well, that has been lagging." The biggest challenge faced by those that do want to invest in security planning upfront is getting an accurate view of all their assets. "What do we have out there? What are we spinning out on a daily basis? What are the subscriptions we have in the cloud? What infrastructure as code? What serverless functionality? ..."


What Colonial Pipeline Means for Commercial Building Cybersecurity

Smart buildings are particularly vulnerable to cyberattacks as more Internet of Things devices are deployed and the use of remote management tools increases. While IT systems are typically focused on the core security triad of confidentiality, integrity, and availability of information, the BMS security triad is different. The BMS focus should be on the availability of operational assets, integrity/reliability of the operational process, and confidentiality of operational information. The deployment of a multidisciplinary defense approach across system levels requires a cost-benefit balanced focus on operations, people, and technology. Managing cyber-risks starts with organizational governance and executive-level commitments. This can include developing a cybersecurity strategy with a defined vision, goals, and objectives, as well as metrics, such as the number of building control system vulnerability assessments completed. In addition, senior leadership needs to ensure that the right technologies are procured and deployed, defenses are deployed in layers, access to the BMS via the IT network is limited as much as possible, and detection intrusion technologies are deployed.


Getting the board on board: a cost-benefit analysis approach to cyber security

A cost-benefit analysis is a method used to evaluate a project by comparing its losses and gains — essentially a quantified and qualified list of pros and cons. Undertaking a cost-benefit analysis is a great way to assess projects because it reduces the evaluation complexity to a single figure. As you can imagine, this makes a cost-benefit analysis an invaluable tool when it comes to explaining the intricacies and selling the value of a robust cyber security strategy to your board. One of the most important things to emphasise in your cost-benefit analysis is the trade-off between paying to prevent a mess versus paying to clean up a mess. A recent Cabinet Office report stated the estimated cost of cyber crime to the UK economy is a whopping £27 billion. And when it comes to individual attacks, a Sophos survey in April 2021 found that the average total cost of recovery from a ransomware attack has more than doubled in a year, increasing from $761,106 in 2020 to $1.85 million in 2021. Of course, investing in preventative cyber security measures also comes at a cost. 
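The "single figure" the excerpt describes can be sketched as a simple expected-value comparison. The $1.85M recovery cost comes from the Sophos survey above; the annual attack probability, the control's cost, and the risk reduction it buys are illustrative assumptions, not figures from the article.

```python
def expected_net_benefit(p_attack, loss, control_cost, risk_reduction):
    """Expected annual savings from a control: avoided loss minus its cost."""
    avoided_loss = p_attack * loss * risk_reduction
    return avoided_loss - control_cost

# Average ransomware recovery cost from the 2021 Sophos survey.
LOSS = 1_850_000

# Illustrative assumptions: 10% annual attack probability, and a
# preventative control costing $100k that cuts the risk by 80%.
benefit = expected_net_benefit(p_attack=0.10, loss=LOSS,
                               control_cost=100_000, risk_reduction=0.80)
print(f"Net annual benefit: ${benefit:,.0f}")  # → Net annual benefit: $48,000
```

A positive figure argues for paying to prevent the mess; a negative one says the control costs more than the risk it retires, which is exactly the trade-off a board can act on.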


OQC Delivers the UK’s first Quantum Computing as-a-Service

OQC’s core innovation, the Coaxmon, solves these challenges using a three-dimensional architecture that moves the control and measurement wiring out of plane and into a 3D configuration. This vastly simplifies fabrication, improving coherence and – crucially – boosting scalability. This key advantage underpins the company’s confidence in its strategy to “build the core and partner with the best”. Just four years after it was founded, having attracted nearly £2m of UK government support and some of the leading scientists and engineers in the field, the pre-Series A startup is now a leader in the “noisy intermediate-scale quantum” (NISQ) era of quantum computing. Yet OQC is doing so with a fundamental advantage when it comes to scaling up to future generations of quantum machines. This radical design innovation and its proven effectiveness so far is driving the company in its mission to help its customers explore the possibilities of quantum advantage. It is also a great example of the value of the National Quantum Technologies Programme in supporting excellent research and the growth of start-ups helping to create a vibrant UK quantum sector.


How I avoid breaking functionality when modifying legacy code

If you were to leave the code as it was written during the first pass (i.e., long functions, a lot of bunched-up code for easy initial understanding and debugging), it would render IDEs powerless. If you cram all capabilities an object can offer into a single, giant function, later, when you're trying to utilize that object, IDEs will be of no help. IDEs will show the existence of one method (which will probably contain a large list of parameters providing values that enforce the branching logic inside that method). So, you won't know how to really use that object unless you open its source code and read its processing logic very carefully. And even then, your head will probably hurt. Another problem with hastily cobbled-up, "bunched-up" code is that its processing logic is not testable. While you can still write an end-to-end test for that code (input values and the expected output values), you have no way of knowing if the bunched-up code is doing any other potentially risky processing. Also, you have no way of testing for edge cases, unusual scenarios, difficult-to-reproduce scenarios, etc. That renders your code untestable, which is a very bad thing to live with.
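A contrived Python sketch of the difference the author describes (the function names and order structure are invented for illustration): the first version crams the branching logic into one method driven by a mode flag, so its edge cases can only be exercised end-to-end; the second exposes each capability as a small, individually testable function that an IDE can surface.

```python
# Before: one "bunched-up" method whose branches are reached only
# through a flag parameter -- hard to test each path in isolation.
def process(order, mode, discount_rate=0.0, tax_rate=0.0):
    total = sum(item["price"] * item["qty"] for item in order)
    if mode == "discounted":
        total -= total * discount_rate
    if mode == "taxed":
        total += total * tax_rate
    return round(total, 2)

# After: each piece of logic is a named, testable unit; an IDE now
# shows subtotal/apply_discount/apply_tax as separate capabilities.
def subtotal(order):
    return sum(item["price"] * item["qty"] for item in order)

def apply_discount(amount, rate):
    return amount - amount * rate

def apply_tax(amount, rate):
    return amount + amount * rate

order = [{"price": 10.0, "qty": 3}]
assert subtotal(order) == 30.0
assert apply_discount(30.0, 0.1) == 27.0
assert round(apply_tax(27.0, 0.2), 2) == 32.4
```

The refactored pieces can each be unit-tested against edge cases (empty orders, zero rates) that the monolithic version could only probe indirectly.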


Regulating digital transformation in Saudi Arabia

Due to the rapidly changing situation, it is necessary to regulate all matters related to digitization and digitalization processes. Therefore, the Kingdom launched the Digital Government Authority (DGA) to serve as an umbrella organization for all digital matters in Saudi Arabia. In today’s article, we shall highlight the scope and powers of the authority. First of all, it is important to get acquainted with the definition of digitization, as knowing the basic concepts from the regulator’s point of view will certainly correct our understanding of all relevant terms and determine their actual purpose. We will start with the most popular term, which is digital transformation. It means transforming and developing business models strategically to become digital models based on data, technology, and communication networks. Hence the role of digital government in supporting administrative, organizational, and operational processes within and between government entities to achieve digital transformation, develop, improve and enable easy and effective access to government information and services. 


White boxes in the enterprise: Why it’s not crazy

The cultural problem is simple; your current network vendor will resent your white-box decision and will likely blame every hiccup or fault you experience on the new gear. When that happens, and you go to your white-box vendor to get their side, they may well blame everything on the software, or they may point the finger back at the proprietary vendor. The software supplier will return the favor by pointing at everyone, and all the players may point at your own integration efforts as the source of the problem. If you had finger-pointing in a two-vendor network, white boxes can make that look like a love fest by comparison. The technical problem is one of management. All network devices have to be managed, and the management systems and practices most enterprises use tend to be tuned to their current devices. White-box management is usually set by the software, and you can’t necessarily expect much of a choice in how the management features work. That means your current network-operations people will have to contend with multiple management choices depending on the devices they use.


Augmenting Organizational Agility Through Learnability Quotient

Architects have an important role to play as well in achieving organizational agility. There are articles and books which talk about achieving the stability of the tech organization via design patterns, reference architectures, best practices, leveraging checklists, etc. One good example is Fundamentals of Software Architecture, by Mark Richards and Neal Ford. To instill and improve the dynamic capability of the organization, technical directions alone are not enough. The architect’s vision and strategy for the organization can only be achieved if the organization is capable of executing with the right knowledge. With ever-changing and evolving tech, having social capital across the organization is vital - more so in growing startups, where the organization’s dynamics keep changing rapidly. Focusing efforts on the learnability of the organization by building a learning community and having a learning culture across the organization is crucial, and architects become the linchpin for that. Matthew Skelton and Manuel Pais summarize it perfectly: modern architects must be sociotechnical architects; focusing purely on technical architecture will not take us far.


Ways to Cultivate a Cyber-Aware Culture

Everyone's vulnerable to phishing scams, from the receptionist to the CEO. No one is exempt. If you think you're immune because you have a better grasp on security than everyone else, well, that's not how it works. Security must be everyone's job. The best way to secure a business is to start at the top. A cyber-aware culture involves cooperation between departments and ongoing education for all employees, irrespective of how high up the hierarchy they are. Whether you're filling out a security awareness questionnaire or writing your organization's next policy document, focusing on the following three elements will help you stay true to your cyber-aware culture. ... When we talk of a cyber-aware culture, enterprises need to understand that there's more to it than technology. It's about people, processes, culture, and engagement. Security leaders need to take a holistic approach to cyber risk management. The only way to steer clear of cyber fines and compromised customers is by involving employees. Empower your people so they become part of the solution instead of a liability.


Back-office bank UX: the lessons to learn from the Citi-Revlon tale

Any company undertaking digitisation must have a clear understanding of the key service, or services, it provides to end-customers. This should be their first port of call. It is then best practice to map the processes involved in delivering those services, including all people and systems. By gathering this information, technology teams will encounter employees that inhabit distinct roles across lifecycles, including subject experts, business leaders, and end-customers. The goal is to gain a near-complete understanding of the existing landscape from employees. This can then be manifested in a service blueprint, a journey map, or a process framework. The organisation should anticipate that different outputs or levels of depth may result, depending on the scope and scenario. While it may seem like many organisations, especially within banking, would already have this institutional knowledge, we have learned that this type of knowledge usually exists in small pockets, not across entire organisational lines. Gaining some understanding of the existing process across the organisation will therefore help clarify opportunities to converge siloed processes, increase efficiency, improve communications, and drive to any other pre-defined business objectives.



Quote for the day:

"Becoming a leader is synonymous with becoming yourself. It is precisely that simple, and it is also that difficult." -- Warren G. Bennis

Daily Tech Digest - July 07, 2021

Mind the gap – how close is your data to the edge?

A lesser talked about technology might be the key to unlocking the potential of technologies like SD-WAN, 5G, edge computing, and IoT. Network Functions Virtualisation (NFV) hosting pushes the limits of modern networks by allowing companies to create an on-demand virtual edge to their networks, closing that gap between an enterprise’s network and the various edge devices where transactions and/or data from mission-critical applications are created. Having NFV as part of your SD-WAN and cloud connectivity strategy enables traffic to make the smallest numbers of hops on the public internet before it passes through a private, secure software-defined network (SDN). In other words, you shorten that first mile, and optimise the middle- and last-mile by having your data travel safely and quickly over a private connection. Think about every time you tap your card at a PoS terminal on a shopping trip or a night out. To process this transaction, your financial details need to travel to a central point and back again. How long do you want those details traveling across public infrastructure? With a virtual edge, this distance can be significantly minimised, providing predictable, secure connectivity.


Discovering Symbolic Models From Deep Learning With Inductive Biases

The symbolic model framework is a general approach to leveraging the advantages of both traditional deep learning and symbolic regression. Graph Networks (GNs or GNNs) are a good example, as they have strong and well-motivated inductive biases that are very well suited to complex, explainable problems. Symbolic regression is applied to fit the different internal parts of the learned model that operate on reduced-size representations. A number of symbolic expressions can then be joined together, giving rise to an overall algebraic equation equivalent to the trained Graph Network. The framework can be applied to problems such as rediscovering force laws, rediscovering Hamiltonians, and a real-world astrophysical challenge, demonstrating that drastic improvements can be made to generalization and that plausible analytical expressions can be distilled. Not only can it recover the injected closed-form physical laws in the Newtonian and Hamiltonian examples, but it can also derive new interpretable closed-form analytical expressions that can be useful in astrophysics.
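The distillation step can be illustrated with a much simpler stand-in: fit a closed-form expression to samples of a hidden "law", as symbolic regression would over a learned message function. Here a plain least-squares fit of one candidate basis function replaces a true symbolic search over expression trees, and the inverse-square law and its constant are invented for the example.

```python
import numpy as np

# Samples from a hidden law: an inverse-square force, F = k / r^2.
rng = np.random.default_rng(0)
r = rng.uniform(1.0, 5.0, size=200)
k = 3.0
F = k / r**2

# Fit the closed-form candidate F = a / r^2 by least squares on the
# basis 1/r^2; a genuine symbolic regressor would instead search over
# many candidate expression forms and pick the simplest accurate one.
X = 1.0 / r**2
a = float(np.dot(X, F) / np.dot(X, X))
print(round(a, 3))  # → 3.0
```

Recovering a ≈ 3.0 "rediscovers" the injected constant; in the actual framework the same fitting is done against the internal messages of a trained Graph Network rather than raw synthetic data.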


Is Security An Illusion? How A Zero-Trust Approach Can Make It A Reality

One of the first steps to implementing zero-trust measures is to find an infrastructure and website security partner that specializes in zero trust and can provide consultation and solutions. Using a partner like this can enable an easy implementation across your company’s environment and systems. Look for a partner that can identify segments and microsegments important to your organization and ensure that security measures are implemented company-wide. This includes thinking about the IT environment in segments and microsegments, which include data like PII or customer data as separate areas that need to be accessed independently. For example, if your organization has implemented a third-party application or applications that support HR functions (including payroll, employee information and more), these partners can help ensure that individuals with access to one of the components (or segments) will not be able to access any of the other segments without separate authorization. Multifactor authentication (MFA) is also a core component of zero-trust implementation.
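The segment-by-segment authorization described above can be sketched as a simple policy check. This is a toy model, not any vendor's API: the principal and segment names and the grants table are invented, and the point is only that access to one segment never implies access to another.

```python
# Each (principal, segment) grant is explicit. Under zero trust,
# holding one grant implies nothing about any other segment.
GRANTS = {
    ("hr_app", "payroll"),
    ("hr_app", "employee_records"),
    ("marketing_app", "customer_pii"),
}

def authorize(principal: str, segment: str) -> bool:
    """Deny by default; allow only if this exact grant exists."""
    return (principal, segment) in GRANTS

# The HR application reaches its own segments...
assert authorize("hr_app", "payroll")
# ...but cannot cross into another microsegment without a separate grant.
assert not authorize("hr_app", "customer_pii")
```

In a real deployment the grants table would live in a policy engine and each check would also verify identity via MFA, but the deny-by-default shape of the lookup is the core of the model.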


Common Linux vulnerabilities admins need to detect and fix

Perhaps the most important individual piece of software on a consumer PC, from a security perspective, is your web browser. Because it's the tool you use most to connect to the internet, it's also going to be the target of the most attacks. It's therefore important to make sure you're running the most recent release version and that it has incorporated the latest patches. Besides not using unpatched browsers, you should also consider the possibility that, by design, your browser itself might be spying on you. Remember, your browser knows everywhere on the internet you've been and everything you've done. And, through the use of objects like cookies (data packets web hosts store on your computer so they'll be able to restore a previous session state), complete records of your activities can be maintained. Bear in mind that you "pay" for the right to use most browsers -- along with many "free" mobile apps -- by being shown ads. Consider also that ad publishers like Google make most of their money by targeting you with ads for the kinds of products you're likely to buy and, in some cases, by selling your private data to third parties.
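To see concretely what those cookie "data packets" carry, Python's standard library can parse a Set-Cookie header; the header below is hypothetical, but the shape is what a tracking host actually sends:

```python
from http.cookies import SimpleCookie

# Hypothetical Set-Cookie header: a session token plus a year-long tracker
header = "session=abc123; tracker=user-42; Path=/; Max-Age=31536000"

jar = SimpleCookie()
jar.load(header)

for name, morsel in jar.items():
    print(name, "=", morsel.value)  # each cookie the host will get back
```

Note the `Max-Age` of 31536000 seconds (one year): a cookie like this identifies you across sessions long after you close the tab, which is exactly how "complete records of your activities" get assembled.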


Agile Methodology Finally Infiltrates the C-Suite

What started as a snowball slowly gathered mass and speed, launching a genuine agile revolution. Today, that revolution has finally reached the executive suite. Agility is officially a C-level concern. In fact, a recent survey of senior business leaders, conducted by IDC and commissioned by ServiceNow, found that 90% of European CEOs consider agility critical to their company's success. That’s no surprise after a year in which the ability to react quickly and effectively to new business challenges took center stage. CEOs are increasingly aware of the success agile companies enjoy. In my experience, these organizations are well-positioned to create great customer and employee experiences, drive productivity, and attract and retain the best talent. However, rather than talking about agility as a standalone C-suite priority, I see it as a foundational enabler of these three key organizational priorities. ... When measured against five types of organizational agility, IDC found that one third of organizations sit in the lower “static” or “disconnected” tiers, while nearly half are categorized as the “in motion” middle tier of the agility journey. Just one in five (21%) have attained the top levels of “synchronized” and “agile.”


Preparing for your migration from on-premises SIEM to Azure Sentinel

Many organizations today are making do with siloed, patchwork security solutions even as cyber threats are becoming more sophisticated and relentless. As the industry’s first cloud-native SIEM and SOAR (security orchestration, automation, and response) solution on a major public cloud, Azure Sentinel uses machine learning to dramatically reduce false positives, freeing up your security operations (SecOps) team to focus on real threats. Moving to the cloud allows for greater flexibility—data ingestion can scale up or down as needed, without requiring time-consuming and expensive infrastructure changes. Because Azure Sentinel is a cloud-native SIEM, you pay for only the resources you need. In fact, The Forrester Total Economic Impact™ (TEI) of Microsoft Azure Sentinel found that Azure Sentinel is 48 percent less expensive than traditional on-premises SIEMs. And Azure Sentinel’s AI and automation capabilities provide time-saving benefits for SecOps teams, combining low-fidelity alerts into potential high-fidelity security incidents to reduce noise and alert fatigue.


Exclusive Q&A: Neuralink’s Quest to Beat the Speed of Type

Most machine learning is open-loop. Say you have an image and you analyze it with a model and then produce some results, like detecting the faces in a photograph. You have some inference you want to do, but how quickly you do it doesn't generally matter. But here the user is in the loop—the user is thinking about moving and the decoder is, in real time, decoding those movement intentions, and then taking some action. It has to act very quickly because if it’s too slow, it doesn't matter. If you throw a ball to me and it takes my BMI five seconds to infer that I want to move my arm forward—that’s too late. I’ll miss the ball. So the user changes what they’re doing based on visual feedback about what the decoder does: That’s what I mean by closed loop. The user makes a motor intent; it’s decoded by the Neuralink device; the intended motion is enacted in the world by physically doing something with a cursor or a robot arm; the user sees the result of that action; and that feedback influences what motor intent they produce next. I think the closest analogy outside of BMI is the use of a virtual reality headset—if there’s a big lag between what you do and what you see on your headset, it’s very disorienting, because it breaks that closed-loop system.
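A toy simulation (not Neuralink's decoder) makes the closed-loop point concrete: a simulated user steers a 1-D cursor toward a target using feedback from the displayed position, while the decoder enacts each intent with a configurable delay. All parameters here are invented for illustration:

```python
def track(target: float, steps: int, lag: int) -> float:
    """Simulate 1-D closed-loop cursor control with a delayed decoder.

    Each step, the user forms an intent from the *displayed* cursor
    position; the decoder enacts that intent `lag` steps later.
    Returns the final distance from the target.
    """
    cursor = 0.0
    pending = [0.0] * lag                  # intents still "in flight"
    for _ in range(steps):
        intent = 0.5 * (target - cursor)   # move halfway toward the goal
        pending.append(intent)
        cursor += pending.pop(0)           # decoder output, delayed by `lag`
    return abs(target - cursor)

no_lag = track(10.0, steps=20, lag=0)
big_lag = track(10.0, steps=20, lag=5)
print(no_lag < big_lag)  # prints True: latency degrades closed-loop control
```

With zero lag the error shrinks geometrically; with a five-step lag the same control gain overshoots and oscillates, which is the BMI (and VR-headset) version of missing the thrown ball.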


Inside Google’s Quantum AI Campus

Google Quantum AI Lab connects the different pieces of quantum computing together using open-source software and cloud APIs. Its products include open-source libraries such as Cirq, OpenFermion, and TensorFlow Quantum. Cirq is Google’s way of defining and modifying quantum circuits: it allows programmers to design and create quantum circuits, analyse them using simulators, and then send them to hardware using Google’s quantum computing service API. OpenFermion, on the other hand, is a library for quantum simulations, especially quantum chemistry and electronic-structure calculations. The error-corrected quantum computer will be the size of a tennis court, said Erik Lucero, Lead Engineer, Quantum Operations and Site Lead at Google Santa Barbara. Within the quantum computer, one million qubits will operate in concert, directed by surface-code error correction. Marissa Giustina, Quantum Electronics Engineer and Research Scientist at Google, is part of the team building the cryogenic hardware that facilitates information transfer. She said the systems inside the campus look like racks of electronics operating at room temperature, connected to a big cylinder: the dilution refrigerator.
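What a circuit library like Cirq does can be sketched at toy scale: a quantum circuit is a sequence of gate matrices applied to a state vector, which a simulator multiplies out to predict measurement probabilities. This is a pure-Python single-qubit sketch, not Cirq's actual API:

```python
import math

# Hadamard gate: maps |0> to an equal superposition of |0> and |1>
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply(gate, state):
    """Multiply a 2x2 gate matrix into a single-qubit state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

state = [1.0, 0.0]          # start in |0>
state = apply(H, state)     # the "circuit": one Hadamard gate
probs = [abs(a) ** 2 for a in state]
print([round(p, 3) for p in probs])  # prints [0.5, 0.5]
```

A real library generalizes this to many qubits (the state vector doubles in size per qubit, which is why a million error-corrected physical qubits is such a hardware undertaking) and then compiles the same circuit description down to pulses on actual devices.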

Scrum uses an iterative approach to product development, which means that the team delivers usable work frequently. This small change is significant: by delivering usable work, the Scrum team is learning by doing and is not building up technical debt, or unfinished work, which would need to be completed later. By delivering a done, usable increment each Sprint, the Scrum team eliminates waste and can gather feedback on what was delivered, enabling faster learning. The graphic by Henrik Kniberg is my favorite example of this concept. ... In the end, the team met its goal of delivering the required business functionality by the deadline. I’m confident this would have been impossible without the Scrum framework. By making decisions based on what they knew at the time and adapting to what they learned through experience, the team also became more efficient. By focusing only on the essentials, the team was able to eliminate waste. And, by delivering usable work frequently, the team gained more transparency about the work ahead of them, eliminating the accumulation of technical debt.


Autonomous Security Is Essential if the Edge Is to Scale Properly

The emerging problem in front of us is how to deliver secure dynamic networking at this extreme scale while meeting the economic and security requirements of the various tiers of service. For instance, SD-WAN for large enterprises has a price point that allows for manual life-cycle operations, but secure networking for small and midsize businesses and the IIoT does not. Security policy operations are manually arbitrated in the enterprise market today, but such manual operations will never work at the future scales we're talking about. Finally, enterprise networks are relatively static, but the fully connected, smart-everything world ahead of us will feature highly dynamic, zero-trust networks at extreme scale. The upshot: For secure networking to function at the scale and price we need, it must become autonomous. When you unpack the nature of cloud-delivered secure endpoint services, you immediately discover the common limiting factor for cost, security, scale, and reliability: people. Right now, secure network operations are manually arbitrated. For example, deployment of a single secure SD-WAN endpoint takes five to nine worker-hours.



Quote for the day:

"A leadership disposition guides you to take the path of most resistance and turn it into the path of least resistance." -- Dov Seidman