Daily Tech Digest - June 16, 2022

High-Bandwidth Memory (HBM) delivers impressive performance gains

In addition to widening the bus to boost bandwidth, HBM technology shrinks the memory chips and stacks them in an elegant new form. HBM chips are tiny compared to the graphics double data rate (GDDR) memory chips that HBM was originally designed to replace: 1GB of GDDR memory takes up 672 square millimeters, versus just 35 square millimeters for 1GB of HBM. Rather than spreading out the transistors, HBM is stacked up to 12 layers high and connected with an interconnect technology called ‘through-silicon via’ (TSV). A TSV runs through the layers of HBM chips the way an elevator runs through a building, greatly reducing the amount of time data bits need to travel. With the HBM sitting on the substrate right next to the CPU or GPU, less power is required to move data between processor and memory, and the CPU and HBM talk directly to each other, eliminating the need for DIMM sticks. “The whole idea that [we] had was instead of going very narrow and very fast, go very wide and very slow,” Macri said.
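
The "wide and slow" trade-off Macri describes can be sanity-checked with simple arithmetic. The sketch below uses representative first-generation figures as assumptions (a 1024-bit HBM stack at roughly 1 Gbps per pin versus a 32-bit GDDR5 chip at roughly 7 Gbps per pin); exact numbers vary by generation and are not taken from the article.

```python
def peak_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak transfer rate in GB/s: number of pins times per-pin rate, over 8 bits/byte."""
    return bus_width_bits * gbps_per_pin / 8

# Representative first-generation figures (assumed, not from the article):
hbm_stack = peak_bandwidth_gbs(1024, 1.0)   # very wide, very slow
gddr5_chip = peak_bandwidth_gbs(32, 7.0)    # very narrow, very fast

print(hbm_stack, gddr5_chip)  # the wide bus wins despite a far lower per-pin rate
```

Even with each pin running seven times slower, the 32x wider bus gives the HBM stack several times the aggregate bandwidth of the GDDR5 chip.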


3 forces shaping the evolution of ERP

If there was any hesitation about moving to cloud-based ERP, it was quashed as the COVID crisis erupted, and corporate workplaces became scattered across countless home-based offices. On-premises ERP is seen as “not as scalable as people thought,” says Sharon Bhalaru, partner at accounting and technology consulting firm Armanino LLP. “We’re seeing a move to cloud-based systems,” to support employees who need to perform HR, financial and accounting tasks remotely. ... Next-generation ERP platforms “give companies real-time transparency with respect to sales, inventory, production, and financials,” the Boston Consulting Group analysts wrote. “Powerful data-driven analytics enables more agile decisions, such as adjustments to the supply chain to improve resilience. Robust e-commerce capabilities help companies better engage with online customers before and after a sale. And a lean ERP core and cloud-first approach increase deployment speed.” ... Unprecedented and ongoing supply chain disruptions underscore the need for greater visibility, more predictable lead times, alternative supply sources, and faster response to disruptions.


Interpol arrests thousands in global cyber fraud crackdown

The operation’s targets included telephone scammers, long-distance romance scammers, email fraudsters and other connected financial criminals. They were identified through a prior intelligence operation using Interpol’s secure global communications network, which shared data on suspects, suspicious bank accounts, unlawful transactions, and communication channels such as phone numbers, email addresses, fake websites and IP addresses. “Telecom and BEC fraud are sources of serious concern for many countries and have a hugely damaging effect on economies, businesses and communities,” said Rory Corcoran. “The international nature of these crimes can only be addressed successfully by law enforcement working together beyond borders, which is why Interpol is critical to providing police the world over with a coordinated tactical response.” Duan Daqi added: “The transnational and digital nature of different types of telecom and social engineering fraud continues to present grave challenges for local police authorities, because perpetrators operate from a different country or even continent than their victims and keep updating their fraud schemes.”


Is Cyber Essentials Enough to Secure Your Organisation?

If you are to have confidence in your security controls, you must implement defence in depth. This requires a holistic approach to cyber security that addresses people, processes and technology. Key aspects of this aren’t addressed in Cyber Essentials, such as staff awareness training, vulnerability scanning and incident response. Employees are at the heart of any cyber security system, because they are the ones responsible for handling sensitive information. If they don’t understand their data protection requirements, it could result in disaster. Meanwhile, vulnerability scanning ensures that organisations can spot weaknesses in their systems before a cyber criminal can exploit them. It’s a more advanced form of protection than is offered with secure configuration and system updates, enabling organisations to proactively secure their systems. Conversely, incident response measures give organisations the tools they need to respond after a security incident has occurred. Most of the damage caused by a data breach occurs after the initial intrusion, so a prompt and organised response can be the difference between a minor disruption and a catastrophe.


Imagining a world without open standards

The open standard makes portability easier for software developers, provides integrators with choice in the building blocks for solutions, and enables customers to focus on solving business problems rather than integration issues. Open standards eliminate the need for organizations to expend energy wrangling with competitors on defining how systems should work, giving them the space and time to focus on building and improving how those systems actually do work. The real benefits, though, are downstream of vendors: open standards mean that businesses can effectively communicate and collaborate both internally and with peers. They mean that the expertise built up by a professional in one market or business can be taken with them wherever they want to work. They mean that a lack of knowledge resources is not the barrier that prevents businesses from making the move towards better, more efficient ways of working. In imagining a world without open standards, then, the image is one of businesses constantly having to navigate between the walled gardens of different technology vendors, reskilling and rehiring as they do so, before they can even begin the serious work of delivering value from that technology.


Good Habits That Every Programmer Should Have

We become good at a technology by working with it for a long time, but how can we become an expert in it? Learning internals is a great habit that helps us become an expert in any technology. For example, after working for some time with Git, you can learn Git internals via the lesser-known plumbing commands. You can make accurate technical decisions when you understand the internals of your technology stack, and you will become more familiar with the limitations and workarounds of a specific technology. Learning internals also helps us understand what we are doing with programming every day. Motivate everyone to learn further about their tools’ internals! ... Sometimes, we derive programming solutions from example code snippets that we find on internet forums. It’s a good habit to give credit to other programmers’ hard work when we use their code snippets, libraries, and tools, even when their licensing documents say that attribution is not required.
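
As a concrete taste of Git internals: a blob's object ID is nothing more than the SHA-1 of a short header plus the file contents, so you can reproduce it outside Git entirely. A minimal sketch, equivalent to what the plumbing command `git hash-object --stdin` computes:

```python
import hashlib

def git_blob_id(data: bytes) -> str:
    """Compute a Git blob object ID: SHA-1 over 'blob <size>\\0' + contents."""
    header = b"blob %d\x00" % len(data)
    return hashlib.sha1(header + data).hexdigest()

# Matches `echo hello | git hash-object --stdin`
print(git_blob_id(b"hello\n"))
```

Knowing this, plumbing commands like `git cat-file -p <id>` stop being magic: the object store is just content addressed by these hashes.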


Reducing Cybersecurity Risk From and to Third Parties

There are a number of ways in which organizations may be able to obtain attack information from third parties, if they agree. Ideally, such requirements should be included in service agreements and partnership contracts for vendors, outsourcers, and partners, as listed in the article, “Using Contracts to Reduce Cybersecurity Risks.” Employment contracts, nondisclosure agreements and license agreements may also include requirements that protect organizations against third-party risk. While it is helpful to request vendors, outsourcers and partners to commit to risk reduction in the contractual terms and conditions, it is even more beneficial for an organization to have direct access to partners’ and suppliers’ security monitoring systems. ... More modern forms of protection monitor messages for origin and content and respond with information about unauthorized sources—as with IDSs—or preventive action—as with IPSs. Advancements in these systems include observation of unusual behavior and the use of artificial intelligence (AI) to determine threats.


How Upskilling Could Resolve The Cybersecurity Skills Gap

With a shortage of new candidates, upskilling provides the answer to the cybersecurity skills gap. And it brings multiple benefits for both employees and businesses. One of the first is that, ultimately, cybersecurity is everyone’s business. From the CEO to the new employee at home, everyone has a role to play in ensuring systems are robust in the face of a growing wave of attacks. While this does not mean that everyone in a company needs to be a cybersecurity professional, it does mean that everyone should be aware of the risks, how to spot potential vulnerabilities and attacks, and the practical measures they must take to prevent them. However, it can also produce a supply of cybersecurity professionals. Waiting for qualified entrants to the jobs market will take too long and, in practice, it’s likely they will not be qualified for long! The cybersecurity environment changes so rapidly that the knowledge many graduates gain at the start of their course may not be relevant by the end. Instead, identifying existing staff with the soft skills, or power skills, to develop, adapt, and learn may be the quickest and easiest path to take.


12 tips for achieving IT agility in the digital era

“If your tech stack is streamlined, easy to access, and easy to use, your workforce can quickly respond to business or customer needs seamlessly,” says Fleetcor’s duFour. Key to this is getting a handle on application sprawl by rationalizing the IT portfolio. Voya Financial’s simplification journey began with such an effort, a process that reduced its application footprint by 17% and its slate of technology tools by one quarter. The work continues as part of its cloud migration. “This practice is instilling standards and discipline that will only help to ensure our environment remains uncluttered and contemporary for the long term,” Keshavan says. As a result, the IT group is faster and more flexible, recently deploying five new cloud services for data science and analytics developers to use within four hours, something that would have taken a cross-functional IT team several weeks to deploy in the past. Reining in application sprawl has also been valuable at Snow Software. “Oftentimes, companies and teams will invest in applications with similar purposes,” says Snow Software CIO Alastair Pooley.


True Component-Testing of the GUI With Karate Mock Server

There’s an important reason why old-style end-to-end tests are often more expensive than they need to be: you tend to test paths that are not relevant to the frontend logic, and each of these adds to the total test-suite run time. Consider a web application for your tax return. The user journey in this non-trivial app consists of submitting a series of questionnaires, their content customized depending on what you answered in previous steps. There is likely some logic on the frontend to manage the turns in that user journey, but the number-crunching over your sources of income and deductibles surely happens on the backend. You don’t need a GUI test to validate the correctness of those calculations; with a mock backend, that would be entirely pointless. You set it up to tell the frontend that the final amount to pay is 12,600 euros. You can test that this amount is properly displayed, but there’s no testing its correctness. All the decisions are made (and hopefully tested) elsewhere, so we can treat it as a hardcoded test fixture.
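
The hardcoded-fixture idea can be sketched in a few lines. Python is used here only for brevity; Karate expresses the same stub declaratively in a mock feature file. The route, response shape, and function names below are illustrative assumptions, not from the article or from Karate's API:

```python
# A stand-in for the mocked backend: fixed canned responses per route, no real tax logic.
MOCK_ROUTES = {
    "/api/tax/result": {"amount": 12600, "currency": "EUR"},
}

def mock_backend(path: str) -> dict:
    """Return the canned fixture for a route, as a mock server would."""
    return MOCK_ROUTES[path]

def render_amount(payload: dict) -> str:
    """Hypothetical frontend display logic, which is what the GUI test actually checks."""
    return f"Amount due: {payload['amount']:,} {payload['currency']}"

# The test asserts presentation, not the tax math:
assert render_amount(mock_backend("/api/tax/result")) == "Amount due: 12,600 EUR"
```

Because the backend answer is a fixture, the only thing this test can fail on is the frontend's handling of it, which is exactly the point of component-testing the GUI in isolation.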



Quote for the day:

"Leaders begin with a different question than others, replacing 'Who can I blame?' with 'How am I responsible?'" -- Orrin Woodward

Daily Tech Digest - June 15, 2022

Software Engineering - The Soft Parts

Transferable skills are those you can take with you from project to project. Let's talk about them in relation to the fundamentals. The fundamentals are the foundation of any software engineering career, and there are two layers to them - macro and micro. The macro layer is the core of software engineering, and the micro layer is the implementation (e.g. the tech stack, libraries, frameworks, etc.). At a macro level, you learn programming concepts that are largely transferable regardless of language. The syntax may differ, but the core ideas are the same. This can include things like: data structures (arrays, objects, modules, hashes), algorithms (searching, sorting), architecture (design patterns, state management) and even performance optimizations. These are concepts you'll use so frequently that knowing them backwards can have a lot of value. At a micro level, you learn the implementation of those concepts. This can include things like: the language you use (JavaScript, Python, Ruby, etc.), the frameworks you use (e.g. React, Angular, Vue, etc.), the backend you use (e.g. Django, Rails, etc.), and the tech stack you use (e.g. Google App Engine, Google Cloud Platform, etc.).
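
A macro-level concept like binary search illustrates the point: the idea carries over unchanged between languages, and only the micro-level syntax differs. A minimal rendering in Python, chosen here purely as one possible micro layer:

```python
def binary_search(items: list, target) -> int:
    """Return the index of target in a sorted list, or -1 if absent.

    The macro-level idea: halve the search space on every comparison.
    """
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

Rewriting this in JavaScript or Ruby changes the keywords, not the algorithm, which is why the macro layer is the part worth knowing backwards.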


Why young tech workers leave — and what you can do to keep them

When employees seek a raise, what they’re really doing is shopping around and comparing offers from other companies, according to Sethi. And when it comes to salaries, companies must keep up with inflation, which is running at about 8% a year. But retaining employees requires more than just pay. Workers also want more support in translating environmental, social, and governance (ESG) considerations to their work. “Fulfilling work and the opportunity to be one’s authentic self at work also matter to employees who are considering a job change," Sethi said. "Pay is table stakes, but I also want my job to be meaningful and fulfilling, and I want to work at a place where I can be myself." Employees also want workplace flexibility. That, and human-centric work policies, can reduce attrition and increase performance. In fact, Gartner found that 65% of IT employees said that whether they can work flexibly affects their decision to stay at an organization.


A neuromorphic computing architecture that can run some deep neural networks more efficiently

Researchers at Graz University of Technology and Intel have recently demonstrated the huge potential of neuromorphic computing hardware for running DNNs in an experimental setting. Their paper, published in Nature Machine Intelligence and funded by the Human Brain Project (HBP), shows that neuromorphic computing hardware could run large DNNs 4 to 16 times more efficiently than conventional (i.e., non-brain inspired) computing hardware. "We have shown that a large class of DNNs, those that process temporally extended inputs such as for example sentences, can be implemented substantially more energy-efficiently if one solves the same problems on neuromorphic hardware with brain-inspired neurons and neural network architectures," Wolfgang Maass, one of the researchers who carried out the study, told TechXplore. "Furthermore, the DNNs that we considered are critical for higher level cognitive function, such as finding relations between sentences in a story and answering questions about its content." In their tests, Maass and his colleagues evaluated the energy-efficiency of a large neural network running on a neuromorphic computing chip created by Intel.


Why Your Database Needs a Machine Learning Brain

By keeping the ML at the database level, you’re able to eliminate several of the most time-consuming steps — and in doing so, ensure sensitive data can be analyzed within the governance model of the database. At the same time, you’re able to reduce the timeline of the project and cut points of potential failure. Furthermore, by placing ML at the data layer, it can be used for experimentation and simple hypothesis testing without it becoming a mini-project that requires time and resources to be signed off. This means you can try things on the fly, and not only increase the amount of insight but the agility of your business planning. By integrating the ML models as virtual database tables, alongside common BI tools, even large datasets can be queried with simple SQL statements. This technology incorporates a predictive layer into the database, allowing anyone trained in SQL to solve even complex problems related to time series, regression or classification models. In essence, this approach "democratizes" access to predictive data-driven experiences.
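
Querying an in-database model can then look like ordinary SQL. The snippet below is an illustrative sketch in the style of tools such as MindsDB, where a trained model is exposed as a virtual table; the table names, column names, and join shape are assumptions for illustration, not a specific product's syntax:

```sql
-- Join source rows against a hypothetical predictive virtual table:
-- each customer row comes back annotated with the model's prediction.
SELECT c.customer_id,
       m.churn_probability   -- predicted column served by the ML layer
FROM   customers AS c
JOIN   churn_model AS m;
```

Anyone who can write a SELECT statement can consume the prediction, which is the sense in which this approach "democratizes" predictive analytics.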


Understanding Low-code Development

If you are interested in getting started with low-code development, you will need a few things. First, you will need a low-code development platform. Several different options are available, so you should analyze your requirements and explore them to find one that meets your needs. Once you have chosen a platform, you will need to learn how to use it, which may require some training or reading documentation. Finally, you will need some ideas for what you want to build. You are now ready to start low-code development. ... Here are some of the downsides of using low-code platforms for software development: Lack of customization – even though the pre-built modules of low-code platforms are incredibly handy to work with, they let you customize your application only to a limited extent. In most cases, low-code components are generic, and if you want to customize your app you should invest time and effort in custom app development.


Authentic Allyship and Intentional Leadership

Enterprises and leaders have to be intentional about their allyship. It has to be authentic allyship, not just surface allyship. I mention intentional allyship because a lot of times people think they’re an ally, and support diversity hires, but they’re just checking a box. We want intentional and authentic allyship. We need you to understand it goes beyond the person you’re helping. You’re helping the generation, not just one person. You think you’re only affecting the employee right in front of you, but that individual has a family and the next generation after them. You’re not just checking a box; you’re impacting destiny. When you’re an intentional ally, you think beyond the person in front of you, beyond the job application, beyond what you see. It’s not about you but what you’re doing for that person and that person’s generation to come. You need to really think about the step you’ll take when it comes to allyship. Make an impact – a lot of times we talk but don’t implement. Activate, implement, follow up. Don’t just implement and leave them there. Follow up – ask them how they’re doing, and if they know anyone else you can bring in. 


Software engineering estimates are garbage

Garbage estimates don’t account for the humanity of the people doing the work. Worse, they imply that only the system and its processes matter. This ends up forcing bad behaviors that lead to inferior engineering, loss of talent, and ultimately less valuable solutions. Such estimates are the measuring stick of a dysfunctional culture that assumes engineers will only produce if they’re compelled to do so—that they don’t care about their work or the people they serve. Falling behind the estimate’s promises? Forget about your family, friends, happiness, or health. It’s time to hustle and grind. Can’t craft a quality solution in the time you’ve been allotted? Hack a quick fix so you can close out the ticket. Solving the downstream issues you’ll create is someone else’s problem. Who needs automated tests anyway? Inspired with a new idea of how this software could be built better than originally specified? Keep it to yourself so you don’t mess up the timeline. Bludgeon people with the estimate enough, and they’ll soon learn to game the system.


Return to the office or else? Why bosses' ultimatums are missing the point

Employers who insist their staff return to the office full time are heading into increasingly dangerous territory. Skilled professionals, tech workers included, have so many opportunities available to them right now that it's difficult to see why they would sacrifice job satisfaction for their bosses. The outlook has never been better for knowledge workers – and indeed, workers more generally – across all industries. Not only are employers paying more to get the skills they need, but the breadth of flexible-working options for employees fed up with office life continues to grow. People aren't just working from home – they're working from wherever they choose, and whenever they choose. At the same time, significant momentum is gathering behind the introduction of a four-day work week, which could push the dynamic even further in favour of worker wellbeing while benefitting employers too. Companies who offer 100% pay for 80% of the hours will have a seriously powerful bargaining chip to play in the war for talent, and no company – regardless of their brand, product or credentials – will be untouchable.


UK needs to upskill to achieve quantum advantage

Discussing the pilot, Stephen Till, fellow at the Defence Science and Technology Laboratory (Dstl), an executive agency of the MoD, said: “This work with ORCA Computing is a milestone moment for the MoD. Accessing our own quantum computing hardware will not only accelerate our understanding of quantum computing, but the computer’s room-temperature operation will also give us the flexibility to use it in different locations for different requirements. “We expect the ORCA system to provide significantly improved latency – the speed at which we can read and write to the quantum computer. This is important for hybrid algorithms, which require multiple handovers between quantum and classical systems.” Piers Clinton-Tarestad, a partner in EY’s technology risk practice, said there is a general consensus that quantum computing will start becoming a reality in 2030. But pilot projects, such as the one being conducted at the MoD, and proof-of-concept applications can help business leaders to understand where quantum technology can be applied. 


Using automation to improve employee experience

The possibilities to improve the employee experience through automation and integration are endless. If you want to pilot something in your organization, poll your employees about what would be the most impactful. Where are they seeing sludge that drags down morale and slows business velocity? You and your IT team can plot each idea on an impact and effort prioritization matrix. Some suggestions may be easier to implement than you think, as many cloud services are already API-enabled, making automation straightforward. Once your team implements an initial valuable and visible integration, more employee lightbulbs will go off, identifying additional ideas for automation and integration for your prioritization backlog. And don’t forget about the ROI calculators in your automation tooling, as they will help objectively refine your prioritization by analyzing your planned and actual savings. Not only will your employees benefit directly from the automation, but they will also feel heard when they see their ideas come to life.
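
The impact-and-effort triage described above can be sketched as a simple 2x2 classification. The idea names, the 1-10 scoring scale, and the threshold below are illustrative assumptions, not from the article:

```python
def quadrant(impact: int, effort: int, threshold: int = 5) -> str:
    """Classify an automation idea on a 2x2 impact/effort prioritization matrix."""
    if impact >= threshold:
        return "quick win" if effort < threshold else "major project"
    return "fill-in" if effort < threshold else "avoid"

# Hypothetical ideas gathered from an employee poll: (name, impact, effort)
ideas = [
    ("auto-provision SaaS accounts", 8, 3),
    ("migrate legacy HR system", 9, 9),
    ("auto-file expense receipts", 4, 2),
]
triaged = {name: quadrant(impact, effort) for name, impact, effort in ideas}
```

Starting with the "quick win" quadrant delivers the visible early integration that, as the article notes, gets more employee lightbulbs going off.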



Quote for the day:

"Uncertainty is a permanent part of the leadership landscape. It never goes away." -- Andy Stanley

Daily Tech Digest - June 14, 2022

Business Architecture - A New Depiction

Crucial to this depiction are components which exist in both the vertical pillars and the horizontal Business Architecture layer, as follows. Application Architecture includes the Business Process component, to associate application components (logical and operational) with the business activity they support. Information Architecture includes the Information component from a business perspective, separately from any logical or operational representation of that information by data (structured or unstructured). Infrastructure Architecture contains the Location component, recognizing that business infrastructure is linked to an organization / location either by physical installation or network access. Business Architecture consists of these business components – shared with the other domains – and, in addition, more complex views that link the architecture with the business plans. For example, an architecture view for a business capability (as defined through capability-based planning) would show how the components support that capability. The three vertical domains can be considered to constitute IT Architecture (for the enterprise).


Meet Web Push

One goal of the WebKit open source project is to make it easy to deliver a modern browser engine that integrates well with any modern platform. Many web-facing features are implemented entirely within WebKit, and the maintainers of a given WebKit port do not have to do any additional work to add support on their platforms. Occasionally, features require relatively deep integration with a platform, which means a WebKit port needs to write a lot of custom code inside WebKit or integrate with platform-specific libraries. For example, to support the HTML <audio> and <video> elements, Apple’s port leverages Apple’s Core Media framework, whereas the GTK port uses the GStreamer project. A feature might also require deep enough customization on a per-application basis that WebKit can’t do the work itself. For example, web content might call window.alert(). In a general-purpose web browser like Safari, the browser wants to control the presentation of the alert itself, but an e-book reader that displays web content might want to suppress alerts altogether. From WebKit’s perspective, supporting Web Push requires deep per-platform and per-application customization.


Introduction to Infrastructure as Code - Part 1: Introducing IaC

In recent years, development has shifted away from monolithic applications and towards microservices architectures and cloud-native applications. However, modernizing apps introduces complexity, as maintaining the cloud computing architecture requires infrastructure automation tools, efficient provisioning, and scaling of new resources. Too many developers still see infrastructure provisioning and management as an opaque process that Ops teams perform using GUI tools like the Azure Portal. Infrastructure as code (IaC) challenges that notion. The practice of IaC unifies development and operations, creating a close bond between code and infrastructure. Why should we use IaC? When you develop an application, you create code, build and version it, and deploy the artifact through the DevOps pipeline. IaC allows you to create your infrastructure in the cloud using code, enabling you to version and execute that code whenever necessary. This three-article series starts with an introduction to IaC. Then, the following two articles in this series show how to use the Bicep language and Terraform HCL syntax to create templates and automatically provision resources on Azure.
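
An IaC definition is just versionable text. As a small, illustrative taste of the Terraform HCL covered later in the series, the minimal file below declares a single Azure resource group; the resource name, label, and region are assumptions chosen for the example:

```hcl
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
  }
}

provider "azurerm" {
  features {}   # the azurerm provider requires this (possibly empty) block
}

resource "azurerm_resource_group" "demo" {
  name     = "rg-iac-demo"   # illustrative names
  location = "westeurope"
}
```

Running `terraform apply` against this file would create the resource group, and the file itself, checked into version control, becomes the source of truth you can review, diff, and re-execute whenever necessary.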


VPN providers flee Indian market ahead of new data rules

The new directive by India's top cybersecurity agency, the Indian Computer Emergency Response Team (CERT-In), requires VPN, Virtual Private Server (VPS) and cloud service providers to store customers' names, email addresses, IP addresses, know-your-customer records, and financial transactions for a period of five years. Surfshark announced on Wednesday, in a post titled "Surfshark shuts down servers in India in response to data law," that it "proudly operates under a strict 'no logs' policy, so such new requirements go against the core ethos of the company." Surfshark is not the first VPN provider to pull its servers from the country following the directive. ExpressVPN decided to take the same step just last week, and NordVPN has also warned that it will remove its physical servers if the directives are not reversed. ... Like many businesses around the world, Indian companies have increased their reliance on VPNs since the COVID-19 pandemic forced many employees to work from home. VPN adoption grew to allow employees to access sensitive data remotely, even as companies started adopting other secure means of remote access, such as Zero Trust Network Access and Smart DNS solutions.


5 top deception tools and how they ensnare attackers

To work, deception technologies essentially create decoys: traps that emulate real systems. These decoys work because of the way most attackers operate. For instance, when attackers penetrate the environment, they typically look for ways to build persistence, which usually means dropping a backdoor. In addition to the backdoor, attackers will attempt to move laterally within organizations, naturally trying to use stolen or guessed access credentials. As attackers find data and systems of value, they will deploy additional malware and exfiltrate data, typically using the backdoor(s) they dropped. With traditional anomaly detection and intrusion detection/prevention systems, enterprises try to spot these attacks in progress across their entire networks and systems, but the problem is that these tools rely on signatures or on easily fooled machine learning algorithms and throw off a tremendous number of false positives. Deception technologies, however, have a higher threshold for triggering events, and the events they do raise tend to be real threat actors conducting real attacks.


MIT built a new reconfigurable AI chip that can reduce electronic waste

The team's optical communication system comprises paired photodetectors and LEDs patterned with tiny pixels. The photodetectors feature an image sensor for receiving data, and LEDs transmit that data to the next layer. Since the components must stack like LEGO bricks to form the reconfigurable AI chip, they must be compatible. "The sensory chip at the bottom receives signals from the outside environment and sends the information to the next chip above by light signals. The next chip, which is a processor layer, receives the light information and then processes the pre-programmed function. Such light-based data transfer continues to other chips above, thus performing multi-functional tasks as a whole," the team explained. ... The team fabricated a single chip with a computing core that measured about four square millimeters. The chip is stacked with three image recognition "blocks", each comprising an image sensor, optical communication layer, and artificial synapse array for classifying one of three letters, M, I, or T. They then shone a pixellated image of random letters onto the chip and measured the electrical current that each neural network array produced in response.


Augmented reality head-up displays: Navigating the next-gen driving experience

HUDs work by projecting a transparent 2D or 3D digital image of navigational and hazard warning information, for example, onto the windscreen of the vehicle. These projected images then merge with the driver's view of the road ahead. Windshield HUDs, for example, are set up so that the driver does not need to shift their gaze away from the road in order to view the relevant, timely information. This technology helps to keep the driver's attention on the road, as opposed to the driver having to look down at the dashboard or navigation system. Technological advances in this area have led to HUDs with holographic displays and AR in 3D. This added depth perception makes it possible to project computer-generated virtual objects in real time into the driver's field of view to warn, inform or entertain the user. The driver's alertness to road obstacles is increased by enabling shorter obstacle visualization times, and eye strain and driving stress levels are reduced. "Holographic HUDs are paramount if we are to explore the possibilities of augmented and mixed reality for road safety," said Jana


Nigerian Police Bust Gang Planning Cyberattacks on 10 Banks

The operation was a coordinated effort between the Economic and Financial Crimes Commission of Nigeria, Interpol, the National Central Bureaus and law enforcement agencies of 11 countries across Southeast Asia, according to Interpol. The operation was initiated after Interpol's private sector partner Trend Micro provided operational intelligence to the agency about the "emergence and usage of Agent Tesla malware" in this case. Agent Tesla was found on the mobile phones and laptops of the syndicate members that were seized by the EFCC during the bust. "Through its global police network and constant monitoring of cyberspace, Interpol had the globally sourced intelligence needed to alert Nigeria to a serious security threat where millions could have been lost without swift police action," Interpol Director of Cybercrime Craig Jones says in the statement. "Further arrests and prosecutions are foreseen across the world as intelligence continues to come in and investigations unfold." 


10 ways DevOps can help reduce technical debt

In most cases, technical debt occurs because development teams take shortcuts to meet tight deadlines and struggle with constant changes. But better collaboration between dev and ops can shorten the SDLC, speed up deployments, and increase their frequency. Moreover, CI/CD and continuous testing make it easier for teams to deal with changes. Overall, the collaborative culture encourages code reviews, good coding practices, and robust testing with mutual help. ... Technical debt is best controlled when managed continuously, which becomes easier with DevOps. Because DevOps facilitates constant communication, teams can track debt, raise awareness of it, and resolve it as soon as possible. Team leaders can also add technical-debt reviews to the backlog and schedule maintenance sprints to deal with it promptly. Moreover, DevOps reduces the chances of incomplete or deferred tasks in the backlog, helping prevent technical debt. ... A true DevOps culture can be the key to managing technical debt over long periods. DevOps culture encourages strong collaboration between cross-functional teams, provides autonomy and ownership, and practices continuous feedback and improvement.


Once is never enough: The need for continuous penetration testing

The traditional attitude to manual pen testing is kind of like the traditional approach to driving navigation: nothing can replace the sophistication and accrued knowledge of a human. A taxi driver will always beat Google Maps, and a trained pen testing professional will find vulnerabilities and attacks that automated tests may miss, or identify responses that appear legitimate to automated software but are actually a threat. The truth is, on a case-by-case basis, this could conceivably be true. But with off-the-shelf tools and services like RaaS (Ransomware as a Service) or MaaS (Malware as a Service) that use AI/ML capabilities to enhance attack efficiency – you’d need an army of pen testers to truly meet the challenges of today’s cyber threats. And once you’d found, trained and employed them – cyberattackers would simply increase their automation efforts and you’d need to draft another army. Not a sustainable cybersecurity model, clearly. Similarly, the widescale adoption of agile development methodologies has translated into increasingly frequent software releases.



Quote for the day:

"If you are truly a leader, you will help others to not just see themselves as they are, but also what they can become." -- David P. Schloss

Daily Tech Digest - June 13, 2022

The Increasingly Graphic Nature Of Intel Datacenter Compute

What customers are no doubt telling Intel and AMD is that they want highly tuned pieces of hardware co-designed for very precise workloads, and that they will want them at much lower volumes for each multi-motor configuration than chip makers and system builders are used to. Therefore, these compute engine complexes we call servers will carry higher unit costs than chip makers and system builders are used to, but not necessarily with higher profits. In fact, quite possibly with lower profits, if you can believe it. This is why Intel is taking a third whack at discrete GPUs with its Xe architecture and, significantly, with the “Ponte Vecchio” Xe HPC GPU accelerator at the heart of the “Aurora” supercomputer at Argonne National Laboratory. And this time the architecture of the GPUs is a superset of the integrated GPUs for its laptops and desktops, not some Frankenstein X86 architecture that is not really tuned for graphics, even if it could be used as a massively parallel compute engine in the way that GPUs from Nvidia and AMD have been.


Under the hood: Meta’s cloud gaming infrastructure

Our goal within each edge computing site is to have a unified hosting environment to make sure we can run as many games as possible as smoothly as possible. Today’s games are designed for GPUs, so we partnered with NVIDIA to build a hosting environment on top of NVIDIA Ampere architecture-based GPUs. As games continue to become more graphically intensive and complex, GPUs will provide us with the high fidelity and low latency we need for loading, running, and streaming games. To run the games themselves, we use Twine, our cluster management system, on top of our edge computing operating system. We build orchestration services to manage the streaming signals and use Twine to coordinate the game servers at the edge. We built and use container technologies for both Windows and Android games. We have different hosting solutions for Windows and Android games, and the Windows hosting solution stems from our integration with PlayGiga. We’ve built a consolidated orchestration system to manage and run the games for both operating systems.


Google AI Introduces ‘LIMoE’

A typical Transformer comprises several “blocks,” each containing several distinct layers. One of these layers is a feed-forward network (FFN). In LIMoE and the works described above, this single FFN is replaced by an expert layer containing multiple parallel FFNs, each of which is an expert. Given a sequence of tokens to process, a learned router predicts which experts should handle which tokens. ... The model’s compute cost is comparable to that of a standard Transformer if only one expert is activated. LIMoE does exactly that, activating one expert per example and matching the dense baselines’ computing cost. The LIMoE router, however, may see either image or text data tokens. MoE models can fail in a distinctive way when they try to deliver all tokens to the same expert. Auxiliary losses, or additional training objectives, are commonly used to encourage balanced expert utilization. The Google AI team discovered that combining numerous modalities with sparsity resulted in novel failure modes that conventional auxiliary losses could not solve. To address this, they created additional losses.
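The expert-routing idea described above can be sketched in a few lines. The following is a toy numpy illustration of top-1 routing, not LIMoE's actual implementation; the layer sizes, initialization, and gating details are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class SparseMoELayer:
    """Toy sparse mixture-of-experts layer: each token is routed to
    exactly one expert FFN (top-1 routing), so per-token compute stays
    close to that of a single dense FFN."""

    def __init__(self, d_model, d_hidden, n_experts):
        # Router weights plus one small two-layer FFN per expert.
        self.router = rng.normal(0, 0.02, (d_model, n_experts))
        self.w1 = rng.normal(0, 0.02, (n_experts, d_model, d_hidden))
        self.w2 = rng.normal(0, 0.02, (n_experts, d_hidden, d_model))

    def forward(self, tokens):
        # tokens: (n_tokens, d_model); in LIMoE these may be image or text tokens.
        gate = softmax(tokens @ self.router)   # routing probabilities per token
        choice = gate.argmax(axis=-1)          # top-1: one expert per token
        out = np.zeros_like(tokens)
        for e in np.unique(choice):
            idx = choice == e
            h = np.maximum(tokens[idx] @ self.w1[e], 0.0)  # expert FFN with ReLU
            # Weight by the gate value, as MoE layers do to keep routing trainable.
            out[idx] = (h @ self.w2[e]) * gate[idx, e][:, None]
        return out, choice

layer = SparseMoELayer(d_model=8, d_hidden=16, n_experts=4)
x = rng.normal(size=(10, 8))
y, assignment = layer.forward(x)
print(y.shape, assignment)
```

If the router sends every token to the same expert, that expert does all the work, which is exactly the imbalance the auxiliary losses mentioned above are meant to discourage.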


Stop Splitting Yourself in Half: Seek Out Work-Life Boundaries, Not Balance

What makes boundaries different from balance? Balance implies two things that aren't equal that you're constantly trying to make equal. It creates the expectation of a clear-cut division. A work-life balance fails to acknowledge that you are a whole person, and sometimes things can be out of balance without anything being wrong. Sometimes you'll spend days, weeks and even whole seasons of life choosing to lean more into one part of your life than the other. Boundaries ask you to think about what's important to you, what drives you, and what authenticity looks like for you. Boundaries require self-awareness and self-reflection, along with a willingness and ability to prioritize. Those qualities help you to be more aware and more capable of making decisions at a given moment. By establishing boundaries grounded in your priorities, you're more equipped to make choices. Boundaries empower you to say, "This is what I'm choosing right now. I need to be fully here until this is done." Boundaries aren't static, either. 


Why it’s time for 'data-centric artificial intelligence'

AI systems need both code and data, and “all that progress in algorithms means it's actually time to spend more time on the data,” Ng said at the recent EmTech Digital conference hosted by MIT Technology Review. Focusing on high-quality data that is consistently labeled would unlock the value of AI for sectors such as health care, government technology, and manufacturing, Ng said. “If I go see a health care system or manufacturing organization, frankly, I don't see widespread AI adoption anywhere.” This is due in part to the ad hoc way data has been engineered, which often relies on the luck or skills of individual data scientists, said Ng, who is also the founder and CEO of Landing AI. Data-centric AI is a new idea that is still being discussed, Ng said, including at a data-centric AI workshop he convened last December. ... Data-centric AI is a key part of the solution, Ng said, as it could provide people with the tools they need to engineer data and build a custom AI system that they need. “That seems to me, the only recipe I'm aware of, that could unlock a lot of this value of AI in other industries,” he said.


How Do We Utilize Chaos Engineering to Become Better Cloud-Native Engineers?

The main goal of Chaos Engineering is as explained here: “Chaos Engineering is the discipline of experimenting on a system in order to build confidence in the system’s capability to withstand turbulent conditions in production.” The idea of Chaos Engineering is to identify weaknesses and reduce uncertainty when building a distributed system. As I already mentioned above, building distributed systems at scale is challenging, and since such systems tend to be composed of many moving parts, leveraging Chaos Engineering practices to reduce the blast radius of failures has proved itself a great method for that purpose. We leverage Chaos Engineering principles to achieve other things besides its main objective. The “On-call like a king” workshops intend to achieve two goals in parallel: (1) train engineers on production failures that we had recently; (2) train engineers on cloud-native practices, tooling, and how to become better cloud-native engineers!


The 3 Phases of Infrastructure Automation

Manually provisioning and updating infrastructure multiple times a day from different sources, in various clouds or on-premises data centers, using numerous workflows is a recipe for chaos. Teams will have difficulty collaborating or even sharing a view of the organization’s infrastructure. To solve this problem, organizations must adopt an infrastructure provisioning workflow that stays consistent for any cloud, service or private data center. The workflow also needs extensibility via APIs to connect to infrastructure and developer tools within that workflow, and the visibility to view and search infrastructure across multiple providers. ... The old-school, ticket-based approach to infrastructure provisioning makes IT into a gatekeeper, where they act as governors of the infrastructure but also create bottlenecks and limit developer productivity. But allowing anyone to provision infrastructure without checks or tracking can leave the organization vulnerable to security risks, non-compliance and expensive operational inefficiencies.


Questioning the ethics of computer chips that use lab-grown human neurons

While silicon computers transformed society, they are still outmatched by the brains of most animals. For example, a cat’s brain contains 1,000 times more data storage than an average iPad and can use this information a million times faster. The human brain, with its trillion neural connections, is capable of making 15 quintillion operations per second. Today this can only be matched by massive supercomputers using vast amounts of energy. The human brain, by contrast, uses only about 20 watts of energy, about the same as it takes to power a lightbulb. It would take 34 coal-powered plants, each generating 500 megawatts, to power the modern data storage centres needed to hold the same amount of data contained in one human brain. Companies do not need brain tissue samples from donors, but can simply grow the neurons they need in the lab from ordinary skin cells using stem cell technologies. Scientists can engineer cells from blood samples or skin biopsies into a type of stem cell that can then become any cell type in the human body.


How Digital Twins & Data Analytics Power Sustainability

Seeding technology innovation across an enterprise requires broader and deeper communication and collaboration than in the past, says Aapo Markkanen, an analyst in the technology and service providers research unit at Gartner. “There’s a need to innovate and iterate faster, and in a more dynamic way. Technology must enable processes such as improved materials science and informatics and simulations.” Digital twins are typically at the center of the equation, says Mark Borao, a partner at PwC. Various groups, such as R&D and operations, must have systems in place that allow teams to analyze diverse raw materials, manufacturing processes, and recycling and disposal options, and to understand how different factors are likely to play out over time, before an organization “commits time, money and other resources to a project,” he says. These systems “bring together data and intelligence at a massive scale to create virtual mirrored worlds of products and processes,” Podder adds. In fact, they deliver visibility beyond Scope 1 and Scope 2 emissions, and into Scope 3 emissions.


API security warrants its own specific solution

If the API doesn’t apply sufficient internal rate limiting on parameters such as response timeouts, memory, payload size, number of processes, records and requests, attackers can send multiple API requests creating a denial of service (DoS) attack. This then overwhelms back-end systems, crashing the application or driving resource costs up. Prevention requires API resource consumption limits to be set. This means setting thresholds for the number of API calls and client notifications such as resets and lockouts. Server-side, validate the size of the response in terms of the number of records and resource consumption tolerances. Finally, define and enforce the maximum size of data the API will support on all incoming parameters and payloads using metrics such as the length of strings and number of array elements. ... Effectively a different spin on BOLA, this sees the attacker able to send requests to functions that they are not permitted to access. It’s effectively an escalation of privilege because access permissions are not enforced or segregated, enabling the attacker to impersonate admin, helpdesk, or a superuser and to carry out commands or access sensitive functions, paving the way for data exfiltration.
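The consumption limits described above can be enforced before any back-end work is done. Here is a minimal Python sketch of a token-bucket rate limiter combined with payload checks; the thresholds and status codes are illustrative assumptions, not values from the article.

```python
import time

class TokenBucket:
    """Simple token-bucket limiter: `rate` requests per second per
    client, with bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

MAX_PAYLOAD_BYTES = 64 * 1024   # illustrative thresholds
MAX_ARRAY_ITEMS = 100
MAX_STRING_LEN = 1024

def validate_request(bucket, payload: dict, raw_size: int):
    """Reject requests exceeding rate or size limits before any
    back-end processing happens."""
    if not bucket.allow():
        return 429, "rate limit exceeded"
    if raw_size > MAX_PAYLOAD_BYTES:
        return 413, "payload too large"
    for key, value in payload.items():
        if isinstance(value, str) and len(value) > MAX_STRING_LEN:
            return 400, f"field '{key}' too long"
        if isinstance(value, list) and len(value) > MAX_ARRAY_ITEMS:
            return 400, f"field '{key}' has too many elements"
    return 200, "ok"

bucket = TokenBucket(rate=5, capacity=5)
print(validate_request(bucket, {"ids": list(range(10))}, raw_size=256))  # (200, 'ok')
```

In production these checks would live in an API gateway or middleware, keyed per client or API key, but the shape of the logic is the same: check limits first, touch back-end resources second.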



Quote for the day:

"To make a decision, all you need is authority. To make a good decision, you also need knowledge, experience, and insight." -- Denise Moreland

Daily Tech Digest - June 11, 2022

Cloud computing security: Where it is, where it's going

Most businesses use multiple cloud services and cloud providers, a hybrid approach that can support granular security options where vital data is kept close (perhaps in a private cloud) while less sensitive applications run in a public cloud to take advantage of big tech's economies of scale. But the hybrid model also introduces new complications, as every provider will have a slightly different set of security models that cloud customers will need to understand and manage. That takes time and (often elusive) expertise. But misconfigured services are high on the list of the causes for security incidents, along with even more basic failures like poor passwords and identity controls. Little surprise that companies are evaluating tools to automate much of this. That's leading to interest in new technologies such as Cloud Security Posture Management (CSPM) tools, which can help security teams spot and fix potential security issues around misconfiguration and compliance in the cloud, so they know the same rules are being enforced across their cloud services.


Jump Into the DevOps Pool: The Water Is Fine

If you’re thinking that becoming a member of a DevOps team sounds interesting, what are the things you need to consider? Having experience in just about any aspect of IT gives you the technical foundation to make yourself a viable candidate. Do some research. What does it take to hone your existing skills to become a successful member of a DevOps team? You’ll likely find that it takes you in a direction well within your reach. Your technical skills are just the beginning though. Your skills will contribute to the broader objective of the DevOps team. Valuable DevOps team members understand how their role fits into the bigger picture. It’s not necessary to know the details of another team member’s discipline. It is, however, important to understand how each of your roles contributes to the DevOps process. This implies that you take some time to learn about each role’s function. Becoming an invaluable DevOps team member goes one step further. DevOps engineers who possess or develop the interpersonal skills to work beyond their team in guiding others, become key players within an organization. 


How to prioritize cloud spending: 5 strategies for architects

The price of spot instances changes over days and weeks, so you can't predict the cost at the time of purchase. The amount of money saved varies depending on the type of resource: Low-priority instances are the least expensive, but they may be unavailable or turn off abruptly depending on capacity demand in the region. But such cases are rare. For example, AWS states that the average interruption frequency across all regions and instance types doesn't exceed 10%. Spot instances are best for stateless workloads, batch operations, and other fault-tolerant or time-flexible tasks. ... Begin by examining your cloud provider's transfer fees. Then, find ways to limit the number of data transfers in your cloud architecture. For example, you may need to change your application behavior and architecture to use computing resources in the closest data location. Transfer on-premises apps that often access cloud-hosted data to the cloud. In contrast to the cloud, specific resources (such as network bandwidth) are considered free in traditional datacenters. So if you move applications from on-premises datacenters, modify your application architecture to limit the amount of data transferred.
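A rough back-of-the-envelope calculation shows why spot instances still pay off for fault-tolerant batch work despite interruptions. All prices, discount rates, and overhead figures below are hypothetical, not quoted cloud-provider numbers.

```python
def effective_spot_cost(on_demand_rate, spot_discount, interruption_rate, retry_overhead):
    """Expected hourly cost of a fault-tolerant batch job on spot capacity.
    `interruption_rate` is the fraction of instances reclaimed, and
    `retry_overhead` is the extra fraction of compute spent redoing
    the work those interruptions lose."""
    spot_rate = on_demand_rate * (1 - spot_discount)
    # Interrupted work must be re-run, inflating total compute hours.
    inflation = 1 + interruption_rate * retry_overhead
    return spot_rate * inflation

on_demand = 1.00  # $/hour, hypothetical on-demand price
spot = effective_spot_cost(on_demand, spot_discount=0.70,
                           interruption_rate=0.10, retry_overhead=0.5)
print(f"on-demand ${on_demand:.2f}/h vs spot ~${spot:.2f}/h")
```

Even with a 10% interruption rate and re-run overhead folded in, the spot job here costs roughly a third of the on-demand price, which is why the article recommends spot for stateless and time-flexible workloads.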


Defensive Cyber Attacks Declared Legal by UK AG

The move highlights a general lack of international agreement about when defensive cyber attacks should be considered appropriate. There has long been a murky world of online espionage in which countries have tacitly agreed to not respond with military force, due in no small part to degrees of plausible deniability and a great difficulty in displaying concrete evidence to the public that another nation’s covert hacking teams were behind a virtual break-in. This unofficial understanding has survived in the internet age, even as allies have been caught spying on each other, so long as everyone refrained from using cyber attacks to cause physical damage. Some developments in recent years have strained that arrangement, including Russia’s repeated cyber attacks on services in Ukraine and the recent willingness of cyber criminals to hit foreign critical infrastructure and government agencies with ransomware attacks. The UK AG has expressed that there is a pressing need to establish formal rules regarding defensive cyber attacks given the demonstrated possibility of devastating incidents that could cause actual damage to civilians, and that existing non-intervention agreements could serve as a launch point.


How AI can give companies a DEI boost

Although many companies are experimenting with AI as a tool to assess DEI in these areas, Greenstein noted, they aren’t fully delegating those processes to AI, but rather are augmenting them with AI. Part of the reason for their caution is that in the past, AI often did more harm than good in terms of DEI in the workplace, as biased algorithms discriminated against women and non-white job candidates. “There has been a lot of news about the impact of bias in the algorithms looking to identify talent,” Greenstein said. For example, in 2018, Amazon was forced to scrap its secret AI recruiting tool after the tech giant realized it was biased against women. And a 2019 study conducted by Harvard Business Review concluded that AI-enabled recruiting algorithms introduced anti-Black bias into the process. AI bias is caused, often unconsciously, by the people who design AI models and interpret the results. If an AI is trained on biased data, it will, in turn, make biased decisions. For instance, if a company has hired mostly white, male software engineers with degrees from certain universities in the past, a recruiting algorithm might favor job candidates with similar profiles for open engineering positions.


A CFO’s perspective on sustainable, inclusive growth

We’ve faced an ongoing health crisis that turned into a social crisis that went to an economic crisis and, unfortunately, we’re facing humanitarian crises, such as the war in Ukraine. But the fact of the matter is, people are making decisions, different decisions than where we were three to five years ago. And I believe they’re challenging the purpose of organizations, businesses, and leadership. As we talk about sustainability and inclusivity with that combination of the foundation for growth, that’s what the priorities of people are today. You asked about today’s CFOs and sustainability, inclusivity, growth. I truly believe that history will be written about these times that we’ve been operating in. As CFOs, we’re always—Eric, as you know quite well—focused on the what: productivity, efficiency, operational stability, liquidity. But I think these times will be less about pure financials and more about a culture. And when I think about culture, IBM—let me give a little shout out to my company—has a framework. We’ve been in existence for 111 years. We have a framework around culture that’s really grounded in purpose, united in values, and demonstrated through growth behaviors. 


Container adoption: 5 expert tips

“If you want to move beyond containers as a tool for developers and put them into production, that means you’ll also be adopting an orchestration layer like Kubernetes and the various monitoring, CI/CD, logging, and tracing tools that go with it,” Haff says. “Which is exactly what many organizations are doing.” Containers and Kubernetes tend to go hand-in-hand because without that orchestration layer, teams otherwise find that managing containers at any kind of scale in production requires untenable effort. Haff notes that 70 percent of IT leaders surveyed in the State of Enterprise Open Source 2022 report said their organizations were using Kubernetes. Speaking of open source, containerization has open source DNA – and adoption often leads to uptake of other open source technologies, too. Make sure you’re using up-to-date, reliable, and secure code. “Containerization leads to more use of open source and other public components,” Korren says. “There are a lot of useful, well-maintained code components on the Internet, but there are many that are not.”


Create End-To-End Integration Of Tools & Data For Flow Insight & Traceability

Without a long-term strategy or clearly assigned data-custody across the digital product lifecycle, data access and management is fragmented between process owners, application owners, or development teams, becoming more unstable with every company re-organization or staff departure. Many organizations reluctantly determine that data islands, duplicate data stores, and conflicting data are inevitable. The chain reaction of resulting issues is both overwhelming and costly. It may not be possible to do a meaningful root cause analysis to resolve incidents, assess the efficiency of digital product delivery, assess the value compared with cost, or receive valuable feedback from development before deployment. Design flaws are repeated, and incorrect processes are unintentionally reinforced. The lack of end-to-end visibility results in a slow response time to development, change, and incident tickets because there is no traceability or data integrity for tracking down the root cause of problems. Add that when data ownership is transferred or unclear, frustrated teams may dodge responsibility and throw issues “over the fence” to other stakeholders through the course of the digital product’s lifecycle.


Using Behavioral Analytics to Bolster Security

Josh Martin, product evangelist at security firm Cyolo, explains that behavioral analytics would not be possible without ML and AI. “The data collected from the detection phase will be fed into multiple AI and ML models that will allow for deeper inspection of access habits to detect patterns or outliers for specific users,” he says. He outlines a potential use case for behavioral analytics and zero trust focused on a team member working from home. This user logs in every day from their corporate Mac around 8:00 in the morning and will either log into Salesforce or O365 first thing. “Considering this is normal for the user, the AI/ML mechanisms will start to look for anything outside of this baseline,” Martin says. “So, when the user takes a vacation to a different state and uses a personal Windows laptop to access ADP around 10 o’clock at night, this would raise a flag and shut down further authentication attempts until a security analyst can investigate. In this case, it could have been a malicious entity using stolen credentials to access payroll information.” From his perspective, behavioral analytics is likely to become the new norm as AI/ML products and knowledge become more accessible to the masses.
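The use case Martin describes can be illustrated with a deliberately simple baseline check. Real products use statistical and ML models rather than set membership, and the attributes, scoring, and names here are illustrative assumptions, but the pattern is the same: learn what is normal per user, then flag deviations.

```python
from dataclasses import dataclass

@dataclass
class LoginEvent:
    user: str
    hour: int      # 0-23, local time
    device: str
    location: str

class BehaviorBaseline:
    """Toy behavioral-analytics check: learn each user's typical login
    hours, devices, and locations, then score new events by how many
    attributes fall outside the learned baseline."""
    def __init__(self):
        self.seen = {}  # user -> sets of observed hours/devices/locations

    def observe(self, e: LoginEvent):
        s = self.seen.setdefault(e.user, {"hours": set(), "devices": set(), "locations": set()})
        s["hours"].add(e.hour)
        s["devices"].add(e.device)
        s["locations"].add(e.location)

    def risk_score(self, e: LoginEvent) -> int:
        s = self.seen.get(e.user)
        if s is None:
            return 3  # unknown user: maximally suspicious
        score = 0
        if e.hour not in s["hours"]:
            score += 1
        if e.device not in s["devices"]:
            score += 1
        if e.location not in s["locations"]:
            score += 1
        return score

baseline = BehaviorBaseline()
for _ in range(30):  # a month of 8 a.m. logins from the corporate Mac
    baseline.observe(LoginEvent("alice", 8, "corporate-mac", "home-state"))

normal = LoginEvent("alice", 8, "corporate-mac", "home-state")
odd = LoginEvent("alice", 22, "personal-windows", "other-state")
print(baseline.risk_score(normal), baseline.risk_score(odd))  # 0 3
```

In Martin's scenario, the late-night login from a personal Windows laptop in another state would trip every check, so the system could block further authentication attempts until an analyst investigates.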


Rekindling the thrill of programming

We could say that programming is an activity that moves between the mental and the physical. We could even say it is a way to interact with the logical nature of reality. The programmer blithely skips across the mind-body divide that has so confounded thinkers. “This admitted, we may propose to execute, by means of machinery, the mechanical branch of these labours, reserving for pure intellect that which depends on the reasoning faculties.” So said Charles Babbage, originator of the concept of a digital programmable computer. Babbage was conceiving of computing in the 1800s. Babbage and his collaborator Lovelace were conceiving not of a new work, but of a new medium entirely. They wrangled out of the ether a physical ground for our ideations, a way to put them to concrete test and make them available in that form to other people for consideration and elaboration. In my own life of studying philosophy, I discovered the discontent of a form of thought whose rubber never meets the road. In this vein, Mr. Brooks completes his thought above when he writes, “Yet the program construct, unlike the poet’s words, is real in the sense that it moves and works, producing visible outputs separate from the construct itself.”



Quote for the day:

"Great Groups need to know that the person at the top will fight like a tiger for them." -- Warren G. Bennis

Daily Tech Digest - June 10, 2022

Everything You Need to Know About Enterprise Architecture vs. Project Management

Even though both have their own set of specialized skills, they still correlate in certain areas. Sometimes different teams are working on various initiatives or parts of a landscape. In the middle of the project, they find out that each team needs to work on the same bit of the software or service ... However, executing such a situation without any mishap requires coordination and a good system in place to foresee these dependencies, since it is hard to keep track of all of them and some might come back to bite you later. This is where enterprise architecture is needed. Enterprise architects are usually well aware of these relationships, and with their expertise in architecture models they can uncover these dependencies better. Such dependencies are usually unknown to the project or program managers. Therefore, this is where enterprise architecture and project management correlate. Enterprise architecture is about managing the coherence of your business, whereas project management is responsible for planning and managing, usually from the financial and resource perspective.


A Minimum Viable Product Needs a Minimum Viable Architecture

In short, as the team learns more about what the product needs to be, they only build as much of the product and make as few architectural decisions as is absolutely essential to meet the needs they know about now; the product continues to be an MVP, and the architecture continues to be an MVA supporting the MVP. The reason for both of these actions is simple: teams can spend a lot of time and effort implementing features and QARs in products, only to find that customers don’t share their opinion on their value; beliefs in what is valuable are merely assumptions until they are validated by customers. This is where hypotheses and experiments are useful. In simplified terms, a hypothesis is a proposed explanation for some observation that has not yet been proven (or disproven). In the context of requirements, it is a belief that doing something will lead to something else, such as delivering feature X will lead to outcome Y. An experiment is a test that is designed to prove or reject some hypothesis.
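The "delivering feature X will lead to outcome Y" hypothesis above is typically validated with a controlled experiment. Below is a minimal sketch of evaluating such an experiment with a two-proportion z-test; the conversion numbers are invented for illustration.

```python
from math import sqrt, erf

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided two-proportion z-test: does variant B's success
    rate differ significantly from variant A's?"""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Standard normal CDF via erf; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothesis: "delivering feature X raises sign-up conversion."
# Control (no feature):   120 conversions out of 2,000 visitors.
# Treatment (feature X):  162 conversions out of 2,000 visitors.
z, p = two_proportion_z(120, 2000, 162, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Reject the null: feature X appears to change conversion.")
```

If the p-value is low, the team has evidence supporting the hypothesis and can justify building the feature (and its architecture) out further; if not, the assumption was invalidated cheaply, which is exactly the point of keeping the product an MVP and the architecture an MVA.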


In Search of Coding Quality

The major difference between good- and poor-quality coding is maintainability, states Kulbir Raina, Agile and DevOps leader at enterprise advisory firm Capgemini. Therefore, the best direct measurement indicator is operational expense (OPEX). “The lower the OPEX, the better the code,” he says. Other variables that can be used to differentiate code quality are scalability, readability, reusability, extensibility, refactorability, and simplicity. “Code quality can also be effectively measured by identifying technical debt (non-functional requirements) and defects (how well the code aligns to the laid specifications and functional requirements),” Raina says. “Software documentation and continuous testing provide other ways to continuously measure and improve the quality of code using faster feedback loops,” he adds. ... The impact development speed has on quality is a question that's been hotly debated for many years. “It really depends on the context in which your software is running,” Bruhmuller says. Bruhmuller says his organization constantly deploys to production, relying on testing and monitoring to ensure quality.


A chip that can classify nearly 2 billion images per second

While current, consumer-grade image classification technology on a digital chip can perform billions of computations per second, making it fast enough for most applications, more sophisticated image classification such as identifying moving objects, 3D object identification, or classification of microscopic cells in the body, are pushing the computational limits of even the most powerful technology. The current speed limit of these technologies is set by the clock-based schedule of computation steps in a computer processor, where computations occur one after another on a linear schedule. To address this limitation, Penn Engineers have created the first scalable chip that classifies and recognizes images almost instantaneously. Firooz Aflatouni, Associate Professor in Electrical and Systems Engineering, along with postdoctoral fellow Farshid Ashtiani and graduate student Alexander J. Geers, have removed the four main time-consuming culprits in the traditional computer chip: the conversion of optical to electrical signals, the need for converting the input data to binary format, a large memory module, and clock-based computations.


Scrum, Remote Teams, & Success: Five Ways to Have All Three

Agile teams have long made use of team agreements (or team working agreements). These set ground rules for the team, created by the team and enforced by the team. When our working environment shifts as much as it has recently, consider establishing some new team agreements specifically designed to address remote work. Examples? On-camera expectations, team core working hours (especially if you’re spread across multiple time zones) and setting aside focus time during which interruptions are kept to a minimum. ... One of the huge disadvantages of a remote team is the lack of personal connections that are made just grabbing a cup of coffee or standing around the water cooler. Remote teams need to be deliberate about counteracting isolation. Consider taking the first few minutes of a meeting to talk about anything non-work related. Set up a time for a team show-and-tell in which each team member can share something from their home or home-office background that matters to them. Find excuses for the team to share anything that helps teammates get to know each other more—as human beings, not just co-workers.


Cisco introduces innovations driving new security cloud strategy

Ushering in the next generation of zero trust, Cisco is building solutions that enable true continuous trusted access by constantly verifying user and device identity, device posture, vulnerabilities, and indicators of compromise. These intelligent checks take place in the background, leaving the user to work without security getting in the way. Cisco is introducing less intrusive methods for risk-based authentication, including the patent-pending Wi-Fi fingerprint as an effective location proxy that does not compromise user privacy. To evaluate risk after a user logs in, Cisco is building session trust analysis using the open Shared Signals and Events standards to share information between vendors. Cisco unveiled the first integration of this technology with a demo of Cisco Secure Access by Duo and Box. “The threat landscape today is evolving faster than ever before,” said Aaron Levie, CEO and Co-founder of Box. “We are excited to strengthen our relationship with Cisco and deliver customers with a powerful new tool that enables them to act on changes in risk dynamically and in near real-time.”
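The Shared Signals and Events framework mentioned above exchanges risk information between vendors as Security Event Tokens (SETs, RFC 8417). Below is an illustrative sketch of a SET body carrying a CAEP "session-revoked" event; the issuer, audience, subject, and reason values are hypothetical placeholders, not Cisco's or Box's actual payloads.

```python
import json

# Illustrative sketch (not any vendor's actual payload) of a Security
# Event Token (SET) body using the OpenID Shared Signals / CAEP
# "session-revoked" event type. Issuer, audience, subject, and reason
# values are hypothetical placeholders.
set_payload = {
    "iss": "https://idp.example.com",       # hypothetical transmitter
    "jti": "756e69717565-6964",             # unique token identifier
    "iat": 1655380800,
    "aud": "https://receiver.example.com",  # hypothetical receiver
    "events": {
        "https://schemas.openid.net/secevent/caep/event-type/session-revoked": {
            "subject": {"format": "email", "email": "user@example.com"},
            "event_timestamp": 1655380800,
            "reason_admin": "Device posture dropped below policy threshold",
        }
    },
}

print(json.dumps(set_payload, indent=2))
```

A receiver such as a SaaS application could subscribe to these events and terminate the matching session in near real time, which is the behavior the Duo and Box demo illustrates.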


10 key roles for AI success

The domain expert has in-depth knowledge of a particular industry or subject area. This person is an authority in their domain, can judge the quality of available data, and can communicate with the intended business users of an AI project to make sure it has real-world value. These subject matter experts are essential because the technical experts who develop AI systems rarely have expertise in the actual domain the system is being built to benefit, says Max Babych, CEO of software development company SpdLoad. ... When Babych’s company developed a computer-vision system to identify moving objects for autopilots as an alternative to LIDAR, they started the project without a domain expert. Although research proved the system worked, what his company didn’t know was that car brands prefer LIDAR over computer vision because of its proven reliability, and there was no chance they would buy a computer vision–based product. “The key advice I’d like to share is to think about the business model, then attract a domain expert to find out if it is a feasible way to make money in your industry — and only after that try to discuss more technical things,” he says.


Be Proactive! Shift Security Validation Left

When security testing only kicks in at the end of the SDLC, the deployment delays caused by late-discovered critical security gaps create rifts between DevOps and SOC teams. Security often gets pushed to the back of the line, and there is little collaboration when a new tool or method is introduced, such as launching occasional simulated attacks against the CI/CD pipeline. Conversely, once a comprehensive, continuous security validation approach is baked into the SDLC, daily attack-technique emulations, invoked through the automation built into XSPM (extended security posture management) technology, identify misconfigurations early in the process and incentivize close collaboration between DevSecOps and DevOps. With inter-team collaboration built into both the security and software development lifecycles, and immediate visibility into security implications, aligning the two teams' goals eliminates the strife and friction once born of internal politics. Shifting extreme left with comprehensive continuous security validation enables you to map and understand the investments made in various detection and response technologies, and to apply findings to preempt attack techniques across the kill chain while protecting real functional requirements.
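The shift-left pattern described above can be sketched as a pipeline gate that runs attack-technique emulations on every build and fails the build when a simulated attack goes undetected. This is a hypothetical sketch, not a real XSPM vendor API: the `simulate` stub and its results are illustrative placeholders (the technique IDs are real MITRE ATT&CK identifiers).

```python
# Hypothetical sketch of a shift-left security gate in a CI pipeline:
# emulate a set of attack techniques after each build and fail the
# pipeline if any simulated attack goes undetected. The simulate() stub
# is an illustrative placeholder, not a real XSPM tool.

from dataclasses import dataclass

@dataclass
class Finding:
    technique: str   # a MITRE ATT&CK technique ID, e.g. "T1552"
    detected: bool   # True if defenses caught the simulated attack

def simulate(technique: str) -> Finding:
    # Placeholder: a real XSPM tool would emulate the technique here.
    # We pretend unsecured credentials (T1552) slip past detection.
    caught = technique != "T1552"
    return Finding(technique, caught)

def security_gate(techniques: list[str]) -> bool:
    """Return True (pass) only if every simulated attack was detected."""
    findings = [simulate(t) for t in techniques]
    for f in findings:
        print(f"{f.technique}: {'detected' if f.detected else 'MISSED'}")
    return all(f.detected for f in findings)

# T1059: scripting interpreter abuse; T1552: unsecured credentials;
# T1078: valid accounts.
passed = security_gate(["T1059", "T1552", "T1078"])
```

Because the gate runs on every build rather than at release time, the misconfiguration surfaces while the change that introduced it is still fresh, which is the collaboration incentive the passage describes.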


Unlocking the ‘black box’ of education data

Technology enables education leaders to understand a child’s learning journey in a way that hasn’t previously been possible, whether by logging the time a child spends on a certain task, recording areas in which students consistently do well or poorly, or noting hours spent in extra-curricular programmes. Edtech allows data on a child to be collected and centralised across their years in school. This data can then be used to build up a holistic picture of the student’s learning to share with everyone who supports that pupil, from teachers, parents and carers to learning support assistants, all of whom can contribute to the discussion of a pupil’s areas for focus and improvement. Artificial Intelligence (AI) data analytics can be a valuable tool, allowing teachers to visualise and assess the most effective ways of learning in the classroom and the metacognitive processes occurring, and to intervene if needed to support learning. Beyond the classroom, education leaders and policy makers can aggregate data to develop strategies and policies.
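The centralisation step described above, turning scattered activity logs into one holistic per-pupil picture, can be sketched as a simple aggregation. The schema, field names, and figures below are hypothetical placeholders, not drawn from any real edtech platform.

```python
# Minimal sketch (hypothetical schema) of centralising per-pupil activity
# logs into a holistic per-subject summary. Data values are illustrative.

from collections import defaultdict

activity_log = [  # one row per logged learning event
    {"pupil": "A", "subject": "maths",   "minutes": 25, "score": 0.8},
    {"pupil": "A", "subject": "reading", "minutes": 40, "score": 0.6},
    {"pupil": "A", "subject": "maths",   "minutes": 30, "score": 0.9},
]

def pupil_summary(log: list[dict], pupil: str) -> dict:
    """Aggregate total minutes and mean score per subject for one pupil."""
    totals = defaultdict(lambda: {"minutes": 0, "scores": []})
    for row in log:
        if row["pupil"] == pupil:
            totals[row["subject"]]["minutes"] += row["minutes"]
            totals[row["subject"]]["scores"].append(row["score"])
    return {
        subject: {
            "minutes": t["minutes"],
            "mean_score": sum(t["scores"]) / len(t["scores"]),
        }
        for subject, t in totals.items()
    }

summary = pupil_summary(activity_log, "A")
```

A summary like this is what teachers, parents, and learning support assistants could all review when discussing a pupil's areas for focus and improvement.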


How to Retain Talent in Uncertain Circumstances

“There was confusion and uncertainty, which led to a willingness for those professionals in those organizations to listen to the opportunities we had,” Sasson says. “There was no visibility whatsoever, which created an environment where they were more open to hearing what else was out there.” In some cases, a company may be planning downsizing after a merger and may let that uncertainty linger because it wants some employees to voluntarily find new jobs, Sasson says. However, in other cases organizations may want to retain their valuable talent, particularly in this tight job market. Just because there’s a merger or acquisition doesn’t necessarily mean that everyone will stampede for the door. ... Sasson’s team asked the employees at Proofpoint why they weren’t interested in new opportunities. “From what we understand, the CEO at Proofpoint and the Thoma Bravo team -- they seemed to do an excellent job of communicating the value of the acquisition and limiting the jitters that would typically be felt by the rank and file,” Sasson said.



Quote for the day:

"A leader should demonstrate his thoughts and opinions through his actions, not through his words." -- Jack Weatherford