Daily Tech Digest - October 24, 2021

Artificial Intelligence Is Smart, but It Doesn’t Play Well With Others

Humans hating their AI teammates could be of concern for researchers designing this technology to one day work with humans on real challenges — like defending against missiles or performing complex surgery. This dynamic, called teaming intelligence, is a next frontier in AI research, and it uses a particular kind of AI called reinforcement learning. A reinforcement learning AI is not told which actions to take, but instead discovers which actions yield the most numerical “reward” by trying out scenarios again and again. It is this technology that has yielded the superhuman chess and Go players. Unlike rule-based algorithms, these AIs aren’t programmed to follow “if/then” statements, because the possible outcomes of the human tasks they’re slated to tackle, like driving a car, are far too many to code. “Reinforcement learning is a much more general-purpose way of developing AI. If you can train it to learn how to play the game of chess, that agent won’t necessarily go drive a car. But you can use the same algorithms to train a different agent to drive a car, given the right data,” Allen says. “The sky’s the limit in what it could, in theory, do.”
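The trial-and-error loop described above can be illustrated with a toy sketch: a minimal epsilon-greedy agent, written here in standard-library Python, learns which of three actions pays best purely from numerical reward. The reward values, noise level, and exploration rate are made up for illustration and are far simpler than anything the article's systems use.

```python
import random

# Minimal reinforcement-learning sketch: an epsilon-greedy agent discovers
# which of three actions yields the most reward purely by trial and error.
# All numbers below are illustrative assumptions, not from the article.

random.seed(0)
true_rewards = [0.2, 0.5, 0.9]   # hidden from the agent
estimates = [0.0, 0.0, 0.0]      # the agent's learned value per action
counts = [0, 0, 0]

for step in range(2000):
    if random.random() < 0.1:                 # explore occasionally
        action = random.randrange(3)
    else:                                     # otherwise exploit the best guess
        action = estimates.index(max(estimates))
    reward = true_rewards[action] + random.gauss(0, 0.1)  # noisy feedback
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    estimates[action] += (reward - estimates[action]) / counts[action]

best = estimates.index(max(estimates))  # index of the best-looking action
```

The agent is never told which action is right; after enough trials its value estimates converge toward the hidden reward means, which is the same principle, at a vastly larger scale, behind the chess and Go systems the article mentions.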


CDR: The secret cybersecurity ingredient used by defense and intelligence agencies

Employees in the defense and intelligence sector are in near-constant contact with each other, sharing information often under challenging circumstances. They move files and documents from low trust environments into networks that hold a nation’s most sensitive data, where a data breach could have a serious impact on national security. Consequently, when it comes to sharing any kind of document, these teams cannot risk threats slipping through the net. Human attackers are now using machines to engineer malware at a pace only imaginable a few years ago. Today, it’s possible to engineer a new piece of malware and to make each version of that file suitably different so that it’s almost impossible for traditional malware protection solutions to identify. In the same way that Facebook or Twitter use algorithms to create a truly unique social feed of information that is tailored to the interests and tastes of a user, bad actors can use similar algorithms to deploy essentially the same underlying threats but packaged in ways that simply evade detection.

Gartner advises tech leaders to prepare for action as quantum computing spreads

Cambridge Quantum’s efforts to expand quantum infrastructure got significant backing earlier this year when Honeywell said it would merge its own quantum computing operations with Cambridge Quantum to form an independent company pursuing cybersecurity, drug discovery, optimization, material science, and other applications, including AI. Honeywell said it would invest between $270 million and $300 million in the new operation. Cambridge Quantum said it would remain independent, working with various quantum computing players, including IBM. The lambeq work is part of an overall AI project that is the longest-term project among the efforts at Cambridge Quantum, said Ilyas Khan, founder and CEO of Cambridge Quantum, in an e-mail interview. “We might be pleasantly surprised in terms of timelines, but we believe that NLP is right at the heart of AI more generally and therefore something that will really come to the fore as quantum computers scale,” he said. Khan cited cybersecurity and quantum chemistry as the most advanced application areas in Cambridge Quantum’s estimation.


How to Not Lose Your Job to Low-Code Software

The amount of work you have is driven by the ability of software to make a meaningful difference in your organization. Take a look at your current queue of work. If your team is like most IT teams, there will be a mountain of unmet demand for new applications or additional functionality for existing applications. Thinking that any amount of automation will reduce that demand to zero is like thinking that a faster car will get you to Mars. If low-code software starts taking some of your work, there will likely be other projects you can work on. If you handle this right, you can even shuffle some of the painful projects over to the party-goers on the low-code bus. ... Secondly, and more fundamentally, there are certain aspects of software engineering that are harder to automate than others, making them unsuitable terrain for the low-code party bus to drive across. For example, low-code tools make it easy for non-developers to create a table to store data. But they can't do much to help the non-developer structure their tables to best map to the business problem they are trying to solve.


API contract testing with Joi

When you sign a contract, you expect both parties to hold up their end of the bargain. The same can be true for testing applications. Contract testing is a way to make sure that services can communicate with each other and that the data shared between the services is consistent with a specified set of rules. In this post, I will guide you through using Joi as a library to create API contracts for services consuming an API. ... Before we get started, let me give you some background about contract testing. This kind of testing provides confidence that different services work when they are required to. Imagine that an organization has multiple payment services that utilize an Authentication API. The API logs users into an application with a username and a password. It then assigns them an access token when the log-in operation is successful. Other services like Loans and Repayments require the Authentication API service once users are logged in. ... Contract tests are designed to monitor the state of an application and notify testers when there is an unexpected result. Contract tests are most effective for services that rely on the stability of the other services they consume.
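Joi is a JavaScript schema library, so the article's contracts would be written in JS; as a language-neutral illustration of the same idea, here is a stdlib-only Python sketch of a consumer-side contract check for the hypothetical Authentication API response described above. All field names and types are assumptions for illustration, not from the article.

```python
# A minimal consumer-side contract check, sketched in plain Python.
# (The article uses Joi in JavaScript; field names here are hypothetical.)

def validate_login_contract(response: dict) -> list:
    """Return a list of contract violations for an Authentication API response."""
    errors = []
    # The consumer only cares that these fields exist with the right types.
    expected = {"access_token": str, "expires_in": int, "user_id": str}
    for field, field_type in expected.items():
        if field not in response:
            errors.append(f"missing field: {field}")
        elif not isinstance(response[field], field_type):
            errors.append(f"wrong type for {field}: expected {field_type.__name__}")
    return errors

# A response that satisfies the contract...
ok = validate_login_contract(
    {"access_token": "abc123", "expires_in": 3600, "user_id": "u-42"}
)
# ...and one that breaks it (token missing, expires_in is a string).
bad = validate_login_contract({"expires_in": "3600", "user_id": "u-42"})
```

The point of the pattern is that a consumer such as the Loans or Repayments service runs checks like this against the provider's responses, so a breaking change to the Authentication API surfaces as a failed contract test rather than a production outage.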


Regulating Crypto: Is It Different – Or Is It the Same?

Regulators need to know what the technology is capable of, but they need not know every technical detail just to make good law. “If you can understand clearly what the technology is doing, I think that you can make pretty good judgments about what the fundamental financial activity is and what regulatory box that financial activity can or should fit in,” he told Webster. Strip those technologies down a bit, and they boil down to some basic underpinning concepts that lend themselves to governance. At the core of blockchain and cryptos is database architecture, said Gerety. “It has some neat properties, but nowhere else in the financial services industry do you get regulated differently if you use SAP or Oracle,” he said. To get a sense of how one might approach “newness” in a sector, he offered a concept of a matrix, with axes denoting what the future “feels like” and might actually “be.” Babies will pretty much always “be” and “feel” the same. Not much in the way of technology will change the experience or feelings one will have with birthing and raising a child, despite the newness of, well, becoming a parent.


Information Theory: Principles and Apostasy

Let’s start with a data science interview question. Usually, as part of an initial screening round for entry-level candidates, I like to find an example on their CV of a project that used real-life data. Real-life data is much nastier than academic and research data. It’s chock-full of missing values, mixed (integer and string) data, and outliers that make consuming and modeling the information grossly more difficult. Invariably, most of the conversation revolves around these real-world considerations. How do you handle missing data? Usual answers involve some sort of replacement strategy, such as filling the gaps with the average value of the column. Fair and reasonable. How do we deal with malformed or mixed data? Again, usually a fair answer involving mapping strings to numbers. Finally, what did you do about the large outlier events? Usually the answer is that they ‘removed them’ because you ‘can’t be expected to predict rare events.’ The ultimate justification: it improved the model’s accuracy. That’s a good answer if building a forecast is a game or contest, much worse if you want to use it.
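The "fair and reasonable" answers above (mean imputation, mapping strings to numbers), plus flagging rather than silently dropping outliers, can be sketched in a few lines of standard-library Python. The column values and the 2-sigma threshold are purely illustrative.

```python
from statistics import mean, stdev

# Toy column with missing values (None) and mixed types (a numeric string).
raw = [12.0, None, "15", 14.0, 300.0, None, 13.5]

# 1. Map malformed/mixed entries to numbers where possible.
numeric = [float(x) if x is not None else None for x in raw]

# 2. Mean-impute the missing values from the observed ones.
observed = [x for x in numeric if x is not None]
col_mean = mean(observed)
imputed = [x if x is not None else col_mean for x in numeric]

# 3. Flag (rather than silently drop) outliers beyond 2 standard deviations --
#    the article's point is that rare events may carry real signal.
mu, sigma = mean(imputed), stdev(imputed)
outliers = [x for x in imputed if abs(x - mu) > 2 * sigma]
```

Keeping the flagged outliers visible, instead of deleting them to flatter an accuracy metric, is exactly the distinction the passage draws between a forecast built for a contest and one you intend to use.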


The OCC Officially Recognizes the Critical and Permanent Role of Blockchain in Banking

This is noteworthy for a couple of reasons. First, it is a recognition that many banks, along with a slew of other financial institutions, are adopting DLT as a technology enabling better processes. Simply put, financial institutions are moving past the exploratory phase of DLT and are now actually implementing the technology into their operations. Secondly, the OCC is declaring its intent to explore and define appropriate governance processes for banks to deploy when such changes are implemented. In other words, the OCC is defining its intent to regulate how such changes should take place. ... The immutability of a distributed ledger provides a new level of security. It is challenging to establish a single customer view across different jurisdictions and business lines. With mutualized data management, DLT allows permitted parties to share data securely and in real time, which could address challenges of Know Your Customer (KYC) and Anti Money Laundering (AML). The themes are clear – DLT injected into the banking and financial ecosystem is an equalizer, a simplifier and a fortifier.


How data drives Air Canada’s cargo business

For business intelligence, the airline has been a long-term user of WebFocus from Tibco. It also uses Microsoft Power BI. Riboulet’s reason for using two BI platforms is that “they complement each other”, each having different functions it finds useful. For example, WebFocus offers Air Canada the ability to push out reports via email, a feature not available in Power BI. Riboulet says this is useful for people working in operations, who may only have access to their phone and need to see embedded reports. Also, the data team noticed that many business users require similar datasets and attributes, which can be pulled together into pre-built reports. The company also uses the data grid feature in WebFocus to aggregate data in a way that can easily be customised by users and exported to Microsoft Excel. It has also deployed WebFocus Hyperstage as a staging area for data, to avoid direct access to its on-premises database systems. Riboulet views the data team at Air Canada Cargo as internal consultants who discuss data requirements with businesspeople.


How Much Power Should Finance Have Over Their Automations?

If you want to automate your finance function and lower the cost of operating your finance and accounting needs, taking control can provide you with numerous benefits. These include prioritization of processes that align with your strategic vision, control over resource investments and commitments, and ensuring SOX control frameworks are adhered to at the onset. It’s not surprising that some finance organizations can feel underserved by their IT partners: IT is responsible for supporting the whole organization, and finance operations can take a back seat to other priorities. This does not mean that IT should be left aside. IT will have a role, even if you run your own automation program end-to-end, and you will need them to have a seat at the table. You will want to avoid creating a shadow IT group and truly focus your financial resources on process improvement and automation. It’s best practice to leverage your IT team for infrastructure, network security, understanding ERP/system schedules, roadmaps, and disaster recovery processes (at a minimum). It is also recommended to adopt the cloud version of the tools, which can significantly reduce the demands on your IT organization.



Quote for the day:

"Problem-solving leaders have one thing in common: a faith that there's always a better way." -- Gerald M. Weinberg

Daily Tech Digest - October 23, 2021

How Artificial Intelligence is Changing DevOps?

AI automatic testing tools can generate tests automatically, and with little to no code at all, so developers don’t have to worry about writing test code. The AI has evolved enough to generate tests automatically by learning the app flows, screens, and elements, requiring little to no human involvement. The automation tools are well built and perform automated audits or checks so frequently that there are almost no instances of errors. They capture feedback at every instant, analyze the input, and identify errors in real time. The intelligence of the tools allows developers and team members to reduce their participation in test automation creation and free up time to focus on more important and urgent tasks, eventually developing a more productive system for the organization. AI is proficient in handling big data with minimal human involvement. For DevOps, this means that huge data sets can now be managed with minimal effort. Since DevOps involves and impacts three functions of an organization simultaneously, it also has tons of data to be managed and maintained on an everyday basis.


How low-code/no-code solutions and automation can triage employee turnover

We are continuing to see AI getting better but it needs to be applied in the right places. For example, teams can automate more of their processes and manual tasks, improving workflows and reducing busywork for agents. Costs are reduced, and customer demand is more easily met, which has proven to lead to happier, more productive support teams. “Solving customer problems and leaving a customer happy is what gets an agent excited about getting up every day and going into the job,” Wolverton says. “We strongly believe getting all of that repetitive work and processes automated so that they can focus on the rewarding work is what’s going to keep their motivation high and keep them in your organization.” It’s also about letting customers help themselves, she adds — more and more, customers want their answer fast, and waiting on the phone for the next available agent isn’t going to cut it. If you can get people their answer quickly and accurately through a search or through a bot, and then only escalate when the issue becomes more complex and a human is uniquely qualified to handle the issue, you’re going to have far more satisfied customers.


Non-Traditional Cybersecurity Career Paths: Entering the Industry

“I’d never considered cyber or even information technology as a career growing up. My interests always piqued around history and physics. I in fact failed first-year engineering for having written an essay on David Hume when asked to discuss induction in engineering. I have an undergraduate degree with a double major in history & philosophy of science and quantum physics. I continued down this path, working in the university’s quantum computing department on the development of quantum circuitry. My work centered on the development of superconducting diamond[s], looking to test and establish the reality of theoretical models predicting room-temperature superconductivity. I believed in making Marty McFly’s future a reality; I was on the path to making superconducting circuitry with the sci-fi application of a hoverboard — although I still don’t believe it’d be able to hover across water. “One day while taking adult skiing lessons with an instructor (now my fiancé), I realized my skillsets weren’t technically focused but operational. I’d spent my theses developing, constructing and rebuilding processes.


Agile talent: How to revamp your people model to enable value through agility

When you cut through it, making the move to agile means you’re really going to be breaking the company down into self-sufficient, multidisciplinary, multidimensional teams. That’s the very essence of agile. However, it’s not all about structure. There are many barriers that must be removed to allow those teams to really work. Some barriers you don’t quite realize are there, and many other barriers don’t appear as barriers today but do appear as barriers going forward. So if you do move the organization to agile, be prepared to drive through a number of the barriers. Because you only really get the true benefit that lies in agile if you’re prepared to put those to the stake. I have talked to many organizations interested in the transition to agile, and in the early conversations the focus is understandably always on the organization’s structure. Having “seen the movie,” and helped many companies in the making of their movie, if I had $100 to spend on agile, I’d put only $10 to $15 against organizational structure. All of the rest I would invest in agile ceremonies and processes, particularly in the people processes.


What Are Low-Code/No-Code Platforms?

Low-code/no-code platforms and capabilities are now provided by a wide range of vendors, from startups trying to fill various niches in the technology up to the large enterprise products and services companies. We have covered the low-code/no-code options available from Microsoft, Google and Amazon previously. While there is plenty of crossover ability to connect to the other companies’ products and services, Amazon is the only one that lacks any ability to tie into data that might be hosted on the other two low-code/no-code platforms. Choosing a low-code/no-code platform will likely be influenced by where an organization has its data located. Just like other services offered by these big three companies, it is much easier to work within the same ecosystem than to mix and match across low-code/no-code tools. Once that decision is made, the work of building out those first low-code tools for an organization should be fairly straightforward. Low-code/no-code development intentionally targets knowledge workers who have familiarity with the processes and workflows within their business unit, department or division but do not necessarily have any coding experience.


PostgreSQL v14 Is Faster, and Friendly to Developers

This release also brings more features to parallel query execution, in which PostgreSQL can devise query plans that leverage multiple CPUs to answer queries faster. Your database can now execute queries in parallel for RETURN QUERY and REFRESH MATERIALIZED VIEW. More prominent updates include pipeline mode for libpq, the interface that developers use to connect their applications to the database. Previously, libpq operated synchronously, waiting for one query to complete before sending the next one to the database. Now developers can feed multiple transactions into the pipeline, and libpq will execute them in turn, feeding the results back into the application. The application no longer has to wait for the first transaction to complete before executing the next one. This was one of the updates about which Shahid commented, “Why did we not think about this earlier? This is such a no-brainer! But that’s how technology progresses.” Another potential no-brainer-in-hindsight is an upgrade to TOAST, which now allows for LZ4 compression. TOAST is the system that allows PostgreSQL to store much larger values.
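As a concrete illustration of the TOAST change, LZ4 compression in PostgreSQL 14 can be enabled per column or as a session/server default; the table and column names below are illustrative:

```sql
-- Use LZ4 for new TOAST-able values in one column (PostgreSQL 14+).
CREATE TABLE documents (
    id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    body text COMPRESSION lz4
);

-- Or make LZ4 the default for newly stored values.
SET default_toast_compression = 'lz4';
```

Existing rows keep whatever compression they were written with; the setting only affects values stored after the change.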


Encouraging STEM uptake: why plugging the skills gap starts at school

Part of the challenge for businesses has been that leaders and recruiters still use assumptions about the value of certain backgrounds and degrees as the basis for their hiring strategy. This has been a particular issue in the technology industry where a formal ‘technical background’ has long been viewed as a minimum requirement to get on the career ladder. In some forward-thinking companies, however, there is more value now being placed on soft skills, such as creativity, persuasion and collaboration. These companies also recognise that employees can build specialist technical skills via routes such as internships, apprenticeships or on the job training. To play a full role in building the STEM workforce, businesses should also offer wider support to organisations that are working to ensure equal opportunities for girls and women. Code First Girls is one of a growing number of organisations that support young adult and working age women, in their case, “to become kick-ass developers and future leaders.” Businesses that are committed to equality of opportunity in their technical teams can help promote inclusion and tap into the female talent pool by working with these like-minded organisations.


Simplifying the complex: Introducing Privacy Management for Microsoft 365

Staying ahead of data privacy regulations and understanding the technical actions you can take to address compliance can be daunting. To help, Microsoft Compliance Manager today has more than 200 regulatory assessment templates covering global, industrial, and regional Data Protection and Privacy regulations, making it easier for customers to interpret, assess, and improve their compliance with regulatory requirements. We recently added three privacy-specific assessments for the Colorado Privacy Act, the Virginia Consumer Data Protection Act (CDPA), and the Egypt Privacy Law. Additionally, we have mapped privacy-specific controls across these assessment templates to the new Privacy Management solution to help you scale your compliance efforts. You can learn more about Compliance Manager, our list of available assessments, and how to use the assessments in our documentation. You can also try the Compliance Manager 90-day trial, which gives you access to 25 assessments.


Remote and hybrid work: 4 tips to ease onboarding

By their nature, hybrid or remote office environments encourage asynchronous collaboration, as not everyone will be online or in the office at the same time. To make asynchronous workflows more manageable, consider the following tips:

- Minimize context switching by muting unnecessary communication channels, not feeling the need to respond immediately, and using messaging apps like Slack asynchronously.
- Set up Slack channels for different languages so people can easily communicate with one another on their own time (this is particularly helpful if you’re working with developers from around the world).
- Use project management tools, such as Jira, which allow everyone to provide input into projects on their own time. These tools also help reduce Zoom fatigue while giving team members the chance to complete tasks irrespective of their time zones.

Working in a remote or hybrid environment can be challenging for many teams. But these recommendations can help you reap significant benefits. You’ll have a chance to attract, retain, and get the most out of other talented developers and IT managers with unique perspectives and different backgrounds – and that will help everyone succeed.


Promoting Creativity in Software Development with the Kaizen Method

The Kaizen method creates continuous improvements by implementing constant positive changes. Over time, these small, gradual improvements can produce significant results. It has long been a key principle of lean manufacturing methods. In English, the word "kaizen" means change for the better (kai = change, zen = good). The philosophy was first introduced at Toyota in Japan after World War II. The car manufacturer formed quality circles — groups of workers who perform similar tasks — in its production process. The teams met regularly to identify and review work-related problems, analyze the situation, and offer improvement suggestions. ... By applying the Kaizen proactive model, SenecaGlobal recently initiated an innovative process to improve the billing rate for a key client by implementing agile methodologies and conducting regular risk assessments for delivery timelines. As part of discovery, the developers uncovered a way to eliminate the need for a third-party software solution to decrypt/encrypt credit card payments, which resulted in significant cost savings. 



Quote for the day:

"One machine can do the work of fifty ordinary men. No machine can do the work of one extraordinary man." -- Elbert Hubbard

Daily Tech Digest - October 21, 2021

7 secrets of successful vendor negotiation

Intentionally withholding critical information is also a terrible tactic. “Vendors and prospects do this all the time, and it never works,” Plato notes. For example: not having the funds necessary to acquire and deploy a technology and expecting the vendor to somehow provide a solution. “It’s unfair to waste a salesperson’s time if you’re not ready to purchase,” Plato states. The reverse is also true for vendors, he notes: “Don’t tell a customer you can meet their expectations when you cannot.” IT negotiations aren’t all that much different from any other type of business bargaining, observes Dmitry Bagrov, managing director of software development firm DataArt UK. “All negotiations rely on basic principles that are universal, and one of the most basic and most often forgotten is that the contract should be profitable for both sides.” Squeezing a vendor for an unprofitable rate or any other unrealistic consideration will only result in an unhappy partner that may then look to increase its margin by supplying inflated estimates, inferior resources, and other types of corner-cutting. Bagrov cautions IT leaders not to fall for the old Hollywood bromide: “It’s not personal; it’s business.”


New Microsoft Sysmon report in VirusTotal improves security

Whether you’re an IT professional or a developer, you’re probably already using Microsoft Sysinternals utilities to help you manage, troubleshoot, and diagnose your Windows systems and applications. The powerful logging capabilities of Sysinternals utilities became indispensable for defenders as well, enabling security analytics and advanced detections. The System Monitor (Sysmon) utility, which records detailed information on the system’s activities in the Windows event log, is often used by security products to identify malicious activity. The new behavior report in VirusTotal includes extraction of Microsoft Sysmon logs for Windows executables (EXE) on Windows 10, with very low latency, and with Windows 11 on the roadmap. This is the latest milestone in the long history of collaboration between Microsoft and VirusTotal. Microsoft 365 Defender uses VirusTotal reports as an accurate threat intelligence source, and VirusTotal uses detections from Microsoft Defender Antivirus as a primary source of detection in its arsenal. Microsoft Sysinternals Autoruns, Process Explorer, and Sigcheck tools integrate VirusTotal reports, and VirusTotal itself uses Sigcheck to report details on Windows portable executable files.


Top tips for growth and success as a developer

The niche role of developers and the specialisation of their skillsets can often lead to isolation. Individuals may not necessarily collaborate with others on the same project, leaving them unaware of how the whole project was completed from start to finish. In contrast, a more collaborative approach, where individuals are encouraged to share ideas and actively work together on tasks can have a multitude of benefits. Not only does it provide a greater understanding of the project management aspect of developer projects, but it allows developers to gain insight, through the expertise of others, into code they may never have written before. ... While skilling up on new technologies is always good, developing your “soft” skills is equally important for your future career prospects. Open source gives you the chance to progress a range of these skills, such as communication, teamwork, and problem-solving. Even the most skilled developers can benefit from open source, where they can learn new skills and form important peer networks.


Database Testing Made Simple, Efficient and Fast

If you involve a database in your Java test suite, make sure it’s a containerized one. The Testcontainers framework takes care of the simplicity requirement. It adds the much-needed abstraction layer around Docker to provision, start and tear down a container of your database during the test suite lifecycle. And it does so with minimal boilerplate, keeping your tests readable. ... An efficient suite of tests does not target the same functionality twice. However, to some degree it’s unavoidable that generic code is called multiple times. Imagine a simple query to fetch a user record. This will be invoked in multiple test scenarios. Throughout the entire test run it may be called fifty times, whereas its functionality needs to be validated only once. This is wasteful. Imagine a test that validates a set of unhappy paths: we want to catch the proper exceptions for an unknown member, an unknown movie, a user who is too young, and an exceeded maximum number of rentals. Every subsequent scenario repeats more queries until it throws its expected exception.


How to right-size edge storage

Edge data centers are generally small-scale facilities that have the same components as traditional data centers but are squeezed into a much smaller footprint. In terms of capacity, determining edge storage requirements is similar to estimating the storage needs of a traditional data center; however, workloads can be difficult to predict, says Jason Shepherd, a vice president at distributed edge-computing startup Zededa. Edge-computing adopters also need to be aware of the cost of upgrading or expanding storage resources, which can be substantial given size and speed constraints. "This will change a bit over time as grid-based edge storage solutions evolve, because there will be more leeway to elastically scale storage in the field by pooling resources across discrete devices," Shepherd predicts. A more recent option for expanding edge-storage capacity independently from edge-compute capacity is computational storage drives that feature transparent compression. They provide compute services within the storage system while not requiring any modifications to the existing storage I/O software stack or I/O interface protocols, such as NVMe or SATA.


Smartphone counterespionage for travelers

If you’re deemed a target worthy of espionage, the IMSI catcher may even be used to install malware on your device. Such malware can take complete control of your phone, granting spies access to the contents on it, the communications from it and even its cameras and microphones. IMSI catchers have been detected at airports throughout the world, including in the United States. But really, they can be located anywhere, including at chokepoints like train stations and shopping centers as well as in the vicinity of hotels typically frequented by foreign travelers. If you’re lucky enough to avoid an IMSI catcher, you can still be monitored by local intelligence through the cell network alone. This is especially true in countries where the cellular infrastructure is state-owned. At the very least, spies will have access to your real-time location and the metadata of your calls. As with IMSI catchers, the cell network can also be used to deliver malware to your device, typically through a malicious carrier update that happens behind the scenes. The end result is that if you’re traveling to a foreign country, especially one that’s hostile to your home country or known to engage in economic espionage, you have to assume that your smartphone will be compromised at some point.


DevOps: 3 skills needed to support its future in the enterprise

While the future looks promising for DevOps experts, much will depend on how DevOps engineers are leveraged to transform how work gets done. For instance, DevOps engineers must continually strive to break down silos while also moving away from traditional development, deployment, and waterfall builds that inhibit the velocity of scalable, qualitative, and reliable software. In a pandemic and post-pandemic world, organizations are modifying their operating plans and must deal with a distributed workforce. IT teams must also consider automation and unbundling previously existing complexities such as siloed development and operations teams. Everything-as-code, hybrid cloud operating models, and automated workflows will be top priorities for every DevOps team. Digital services must excel across all organizational functions in order to delight customers. Meanwhile, organizations will continue to focus on how to increase revenue while reducing costs. Experience, processes, effectiveness, utilization, quality, and speed are the levers for improvement.


CISA Leader Backs 24-Hour Timeline for Incident Reporting

Wales' support for a 24-hour timeline aligns with the Senate Select Intelligence Committee's Cyber Incident Notification Act of 2021 - sponsored by Sens. Mark Warner, D-Va., Marco Rubio, R-Fla., and Susan Collins, R-Maine. The bill would require federal agencies, federal contractors and organizations that are considered critical to U.S. national security to report security incidents to CISA within 24 hours of discovery. Per the bill, companies that do not report an incident within 24 hours could face a maximum financial penalty equal to 0.5% of the previous year's gross revenue. The measure, however, allows for exceptions to the penalty. Another provision would allow organizations to anonymize personal data when they report a breach - to encourage victims to report incidents without revealing sensitive data. Some cybersecurity experts have said that it's unrealistic to expect organizations to report incidents within 24 hours of discovery because they need more time to properly assess an attack and determine if it meets the criteria for notification.


The best approach to AI assistants and process automation for your business

For firms to harness the full potential of AI assistants and process automation, an effective approach is to consider how closely the two are intertwined. We’ve seen from experience that one of the most effective and logical methods of implementing AI and automation is to introduce digital assistants into existing customer services, where they can be used to capture and create a log of conversations. Presently, many companies’ customer services are constrained by the availability of employees to staff phone lines or speak to customers in person, which can be a challenge outside of normal working hours. Digital assistants help to close the customer services gap by offering a 24/7 solution with which consumers can share their questions and issues whenever they need to, safe in the knowledge that the enquiry will be logged and prioritised accordingly. This is not to suggest that digital assistants should be viewed as a replacement for human engagement with customers – a survey conducted by Dutch tech firm Usabilla found that 55% of people still like to speak with a human customer service agent on the phone.


Takeoff: What Software Development Can Learn from Aviation

As with pilots practicing how to react to an engine outage, we regularly practice how to react to a database outage. Once a month, two of our engineers are randomly selected to run a database outage drill. We present them with the scenario that one of the databases on our staging system has crashed and needs to be restored from a backup. In this scenario they are the only people available and need to get the database up and running as soon as possible. We learned pretty quickly that these drills are enormously helpful. They give our people the confidence that if something like this actually happens, they won’t have to guess (or find some documentation on) what the next move could be, but can rely on their experience. It has also greatly improved our documentation and tooling, which, apart from being helpful in an emergency, has given us a better overview of our system landscape. We can already see that when performing the drill for the second or third time, our engineers are a lot more relaxed. They know what to do and what to expect.
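The drill routine described above can be sketched in a few lines. This is a hypothetical illustration, not the team's actual tooling; the engineer names and checklist steps are invented for the example.

```python
import random

def pick_drill_crew(engineers, seed=None):
    """Randomly select two engineers for this month's outage drill."""
    rng = random.Random(seed)
    return rng.sample(engineers, 2)

# Invented checklist mirroring the scenario in the article: a staging
# database has crashed and must be restored from a backup.
DRILL_STEPS = [
    "Declare the staging database crashed",
    "Locate the most recent backup",
    "Restore the backup to a fresh instance",
    "Run smoke queries to verify data integrity",
    "Record timings and gaps in the runbook",
]

crew = pick_drill_crew(["ana", "ben", "chi", "dev", "eva"], seed=42)
print("This month's drill crew:", crew)
for step in DRILL_STEPS:
    print("-", step)
```

Seeding the selection (as in the sketch) makes the rotation reproducible for audits; omitting the seed gives a genuinely random monthly pick.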



Quote for the day:

"When leaders are worthy of respect, the people are willing to work for them. When their virtue is worthy of admiration, their authority can be established." -- Huananzi

Daily Tech Digest - October 20, 2021

The challenges of cloud data management

IT departments are facing a growing challenge to stay abreast of advancements in cloud technologies, provide day-to-day support for increasingly complex systems, and adhere to ever-changing regulatory requirements. In addition, they must ensure the systems they support are able to scale to meet performance objectives and are secured against unauthorized access. ... Much like data security, adhering to regulatory compliance frameworks is a shared responsibility between the customer and cloud provider. Larger cloud vendors will provide third-party auditor compliance reports and attestations for the regulatory frameworks they support. It will be up to each organization to read the documentation and ensure the contents meet specific compliance needs. Most leading platforms will also provide tools to help clients configure identity and access management, secure and monitor their data, and implement audit trails. But the responsibility for ensuring the tools' configuration and usage meet the framework's control objectives rests solely with the customer. ... We know one of IT's core responsibilities is to transform raw data into actionable insights.


Learning to learn: will machines acquire knowledge as naturally as children do?

We create new-to-the-world machines, with sophisticated specifications, that are hugely capable. But to reach their potential, we have to expose them to hundreds of thousands of training examples for every single task. They just don’t ‘get’ things like humans do. One way to get machines to learn more naturally is to help them learn from limited data. We can use generative adversarial networks (GANs) to create new examples from a small core of training data rather than having to capture every situation in the real world. It is ‘adversarial’ because one neural network is pitted against another to generate new synthetic data. Then there’s synthetic data rendering – using gaming engines or computer graphics to render new scenarios. Finally, there are algorithmic techniques such as domain adaptation, which involves transferable knowledge (using data in summer that you have collected in winter, for example), or few-shot learning, which involves making predictions from a limited number of samples. Taking a different limited-data route is multi-task learning, where commonalities and differences are exploited to solve multiple tasks simultaneously.
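The core idea behind these techniques – growing a small dataset synthetically rather than collecting every real-world case – can be shown with a toy sketch. This is not a GAN; it is a minimal jitter-based augmentation, with all numbers invented, purely to illustrate "new examples from a small core of training data".

```python
import random

def augment(samples, copies=5, noise=0.05, seed=0):
    """Expand a small numeric dataset by appending jittered copies.

    A deliberately simple stand-in for the limited-data idea in the
    article: synthesize plausible new training examples instead of
    capturing every situation in the real world.
    """
    rng = random.Random(seed)
    synthetic = []
    for x in samples:
        for _ in range(copies):
            sigma = noise * abs(x) if x else noise
            synthetic.append(x + rng.gauss(0, sigma))
    return samples + synthetic

core = [1.0, 2.0, 3.0]          # tiny "real" dataset
expanded = augment(core)
print(len(core), "->", len(expanded))  # 3 -> 18
```

A real GAN replaces the jitter with a generator network trained against a discriminator, but the pipeline shape – small real core in, larger synthetic set out – is the same.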


IT hiring: 5 signs of a continuous learner

Whatever you call it, it’s an important attribute to consider when hiring or grooming the most capable IT professionals today. A continuous learner can offer more bang for the buck in one of the strongest job markets in recent years. “We have found that many companies, while their job descriptions state they are looking for a certain number of years of experience in a laundry list of technologies, are being more flexible and hiring candidates that may be more junior, or those who lack a few main technologies,” Spathis says, noting that many organizations are willing to take the risk on more junior or less specifically experienced candidates who are eager, trainable, and able to learn new skills. There’s definite agreement on the demand for continuous learners in the IT function today. “To thrive during these changing times, it’s imperative that IT organizations continuously grow and change with changing needs,” says Dr. Sunni Lampasso, executive coach and founder of Shaping Success. “As a result, IT organizations that employ continuous learners are better equipped to navigate the changing work world and meet changing demands.”


Ethical and Productivity Implications of Intelligent Code Creation

AI technology is changing the working process of software engineers and test engineers. It is promoting productivity, quality, and speed. Businesses use AI algorithms to improve everything from project planning and estimation to quality testing and the user experience. Application development continues to evolve in its sophistication, while the business increasingly expects solutions to be delivered faster than ever. Most of the time, organizations have to deal with challenging problems like errors, defects, and other complexities while developing complex software. Development and testing teams no longer have the luxury of time they had when monthly product launches were the gold standard. Instead, today’s enterprises demand weekly releases and updates that trickle in even more frequently. This is where self-coded applications come into play. Applications that generate the code themselves help programmers accomplish a task in less time and increase their programming ability. Artificial intelligence is the result of coding, but now coding is also the result of artificial intelligence. It is now helping almost every sector of business and its coders to enhance the software development process.


How To Transition From Data Analyst To Data Scientist

Before even thinking about making the transition, one has to be very clear about what a data scientist does, and introspect on the gap between the skills the role requires and the skills one has now. A data scientist not only handles data but provides much deeper insights from it. Other than gaining the right mathematical and statistical know-how, training yourself to look at business problems with the mindset of a data scientist, and not just that of a data analyst, will be of great help. This means that while looking into a problem, developing your critical thinking and analytical skills, getting deep into the problem to be solved at hand, and coming up with the right way to approach the solution will train you for the future. A data analyst might get by without great coding skills, but a data scientist surely has to know them well. Data scientists use tools like R and Python to derive interpretations from the massive data sets they handle. As a data analyst, if you are not great at coding or don’t know the common tools, it would be wise to start taking basic courses on them and then use them in real-world applications.
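The "deeper insights" distinction can be made concrete: where an analyst might report an average, a scientist models the relationship in the data. Below is a minimal Python sketch using ordinary least squares on invented numbers (the spend/revenue figures are illustrative only).

```python
from statistics import mean

def fit_line(xs, ys):
    """Ordinary least squares for one feature: y ~ slope*x + intercept."""
    mx, my = mean(xs), mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical data: an analyst reports mean revenue; a scientist asks
# how revenue responds to spend and quantifies the relationship.
ad_spend = [1, 2, 3, 4, 5]
revenue  = [2.1, 4.0, 6.2, 7.9, 10.1]
slope, intercept = fit_line(ad_spend, revenue)
print(f"each unit of spend adds roughly {slope:.2f} units of revenue")
```

In practice this would be done with R or a Python library such as scikit-learn, but the shift in mindset – from describing data to modelling it – is the point here.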


Application Security Manager: Developer or Security Officer?

First, an ASM has to understand what a supervised project is about. This is especially important for agile development, where, unlike the waterfall model, you don’t have two months to perform a pre-release review. An ASM’s job is to make sure that the requirements set at the design stage are correctly interpreted by the team, properly adopted in the architecture, are generally feasible, and will not cause serious technical problems in the future. Typically, the ASM is the main person who reads, interprets, and assesses automated reports and third-party audits. ... Second, an ASM should know about various domains, including development processes and information security principles. Hard skills are also important because it’s very difficult to assess the results provided by narrow specialists and automated tools if you can’t read the code and don’t understand how vulnerabilities can be exploited. When a code analysis or penetration test reveals a critical vulnerability, it’s quite common for developers (who are also committed to creating a secure system) to not accept the results and claim that auditors failed to exploit the vulnerability.


Top Open Source Security Tools

WhiteSource detects all vulnerable open source components, including transitive dependencies, in more than 200 programming languages. It matches reported vulnerabilities to the open source libraries in code, reducing the number of alerts. With more than 270 million open source components and 13 billion files, its vulnerability database continuously monitors multiple resources and a wide range of security advisories and issue trackers. WhiteSource is also a CVE Numbering Authority, which allows it to responsibly disclose new security vulnerabilities found through its own research. ... Black Duck software composition analysis (SCA) by Synopsys helps teams manage the security, quality, and license compliance risks that come from the use of open source and third-party code in applications and containers. It integrates with build tools like Maven and Gradle to track declared and transitive open source dependencies in applications’ built-in languages like Java and C#. It maps string, file, and directory information to the Black Duck KnowledgeBase to identify open source and third-party components in applications built using languages like C and C++. 
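The "transitive dependencies" both tools track are simply the dependencies of your dependencies. A toy sketch of how an SCA tool walks that graph and matches components against a vulnerability database is below; every package name and the vulnerability list are invented, and real tools like WhiteSource or Black Duck work from far richer metadata.

```python
# Toy dependency graph: package -> direct dependencies (names invented).
DEPS = {
    "webapp":   ["http-lib", "json-lib"],
    "http-lib": ["tls-lib"],
    "json-lib": [],
    "tls-lib":  [],
}
KNOWN_VULNERABLE = {"tls-lib"}  # stand-in for a vulnerability database

def all_dependencies(pkg, graph):
    """Depth-first walk collecting direct *and* transitive dependencies."""
    seen = set()
    stack = list(graph.get(pkg, []))
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(graph.get(dep, []))
    return seen

def vulnerable_components(pkg, graph, vuln_db):
    return sorted(all_dependencies(pkg, graph) & vuln_db)

print(vulnerable_components("webapp", DEPS, KNOWN_VULNERABLE))
```

Note that `webapp` never declares `tls-lib` directly; the walk surfaces it anyway, which is exactly why transitive analysis matters.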


Why You Don't Need to Be a Business Insider in Order to Succeed

No matter what anyone tells you, it’s not a zero-sum game. There is abundance out there for everyone. Of course, money becomes concentrated with various people, but wealth mobility is very real and happening all the time. We hear people talk about the 1% all the time (often in an effort to paint them as a monolithic, evil, controlling class). What they fail to recognize is that people are constantly moving in and out of the 1%. Some of this is down to inherited wealth, and some is down to hard work – but it’s happening constantly. What really lies at the heart of this is fear. We abdicate our power to an imagined ruling class because we’re afraid of the unknown. And before you think this is about blaming you: it is our subconscious being unwilling to take the risk that stops us. You have a built-in stowaway in your mind who wants to maintain the status quo. Therefore, any new growth opportunities – while intellectually exciting and appealing – will be met with emotional resistance at some point. I’m sure you’ve had this happen to you before: you get a new career-changing offer, you do a little dance and head off to celebrate.


Why a new approach to eDiscovery is needed to decrease corporate risk

For businesses, the combination of these factors has led to a big increase in corporate risk, putting significant pressure on any corporate investigations that need to be conducted and making the eDiscovery process much more difficult. Not only are employees and their devices a lot less accessible than they used to be, but the growing use of personal devices, many of which lack proper security protocols or use unsecured networks, leaves company data much more vulnerable to theft or loss. If that wasn’t enough, heightened privacy concerns and the likelihood that personal data will be unintentionally swept up in any eDiscovery processes can make employees even more reluctant to hand over their devices to investigators if/when needed (if investigators can even get hold of them). As a result, many companies are suddenly finding themselves between a rock and a hard place. How can they operate a more employee friendly hybrid working model while still maintaining the ability to carry out corporate investigations and eDiscovery in the event it’s required?


Three key areas CIOs should focus on to generate value

CIOs and IT executives should focus on three types of partner connections: one-to-one, one-to-many and many-to-many. A one-to-one connection can be taken to the next level and become a generative partnership where the enterprise and technology partner work together to create and build a solution that doesn’t currently exist. The resulting assets are co-owned and produce benefits and revenue for both partners. Generative partnerships are becoming more common. In fact, Gartner forecasts that generative-based IT spending will grow at 31% over the next five years. Beyond one-to-one connections is the formation of ecosystems of multiple partners. One-to-many partnerships work best when a single enterprise needs to focus many players on jointly solving a single problem – such as a city bringing together public and private entities to serve the citizen. Many-to-many partnerships are created when a platform brings many different enterprises’ products and services together, to be offered to many different customers. Often called platform business models, these marketplaces and app/API stores enable the many to help the many at ecosystem scale.



Quote for the day:

"Leaders are people who believe so passionately that they can seduce other people into sharing their dream." -- Warren G. Bennis

Daily Tech Digest - October 19, 2021

Micro Frontend Architecture

The idea behind Micro Frontends is to think about a web app as a composition of features that are owned by independent teams. Each team has a distinct area of business it cares about and specializes in. A team is cross-functional and develops its features end-to-end, from database to user interface. ... But why do we need micro frontends? Let’s find out. In the modern era of web apps, the front end is becoming bigger and bigger while the back end grows less important, so most of the code now lives in the front end. The monolith approach doesn’t work for a larger web application; there needs to be a way of breaking it up into smaller modules that act independently. The solution to the problem is the micro frontend. ... It heavily depends on your business case whether you should or should not use micro frontends. If you have a small project and team, micro frontend architecture is not really required. At the same time, large projects with distributed teams and a large number of requests benefit a lot from building micro frontend applications. That is why today micro frontend architecture is widely used by many large companies, and that is why you should opt for it too.
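The composition idea can be sketched abstractly: each team ships an independently built fragment, and a thin shell only knows how to assemble them. The Python simulation below is purely illustrative (team names and markup invented); real micro frontends would use browser-side integration such as module federation or web components.

```python
# Simulated micro-frontend shell: each team registers an independent
# fragment renderer; the shell knows nothing about fragment internals.
class Shell:
    def __init__(self):
        self.fragments = {}

    def register(self, team, render):
        """A team contributes its feature as a self-contained renderer."""
        self.fragments[team] = render

    def render_page(self):
        """Compose the page from whatever fragments teams have shipped."""
        return "\n".join(render() for render in self.fragments.values())

shell = Shell()
shell.register("search-team",   lambda: "<search-box>...</search-box>")
shell.register("checkout-team", lambda: "<checkout>...</checkout>")
print(shell.render_page())
```

The key property is that either team can replace its renderer without touching the shell or the other team's code – which is the independence the architecture is after.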


CodeSee Helps Developers ‘Understand the Codebase’

As a developer, you’ve likely faced one problem again and again throughout your career: struggling to understand a new codebase. Whether it’s a lack of documentation, or simply poorly-written and confusing code, working to understand a codebase can take a lot of time and effort, but CodeSee aims to help developers not only gain an initial understanding, but to continually understand large codebases as they evolve over time. “We really are trying to help developers to master the understanding of codebases. We do that by visualizing their code, because we think that a picture is really worth a thousand words, a thousand lines of code,” said CodeSee CEO and co-founder Shanea Leven. “What we’re trying to do is really ensure that developers, with all of the code that we have to manage out there — and our codebases have grown exponentially over the past decade — that we can deeply understand how our code works in an instant.” Earlier this month, CodeSee, which is still in beta, launched OSS Port to bring its code visibility and “continuous understanding” product to open source projects, as well as give potential contributors and maintainers a way to find their next project.


Non-Coder to Data Scientist! 5 Inspiring Stories and Valuable Lessons

While looking for inspiring journeys, I focus on people coming from a non-traditional background: people coming from non-technology backgrounds, people having zero coding experience. I guess this is what makes their stories inspiring. All those who found their success in data science were willing to learn to code. They were not intimidated by the Kaggle notebooks that they were not able to understand initially. They all understood that it takes time to gain knowledge and persevered until they acquired all the required knowledge. Programming is one of the biggest show stoppers. It is this particular skill that frustrates many. It even makes them give up their passion for a career in data science. Programming is not exactly a hard thing to learn. ... Having a growth mindset plays a major role in data science. There are many topics to learn and it can be overwhelming. Instead of saying “I can’t learn math”, “I can’t be a good programmer”, or “I can never understand statistics”, people with a growth mindset tend to stay positive and keep trying.


How To Stay Ahead of the Competition as an Average Programmer

Apart from getting the satisfaction of being helpful, it has multiple career benefits too. One, I get to learn a lot more by helping others. Two, continuously helping others builds trusted relationships within the organization. In the software industry, your allies come to your help more than you realize. They can return the favor during application integration, defect resolution, challenging meetings, or even in promotion discussions. If you know people and have helped them before, they will be happy to bail you out of difficult situations. Hence, never hesitate to help others at your workplace. ... Simultaneously, it might not be possible for you to be of help to everyone. But you can explain why you are unable to help. Being arrogant or repeatedly rejecting requests as not your responsibility makes others think you are not a team player. ... While working in a team environment, you are bound to face challenges. You need to follow company policies and processes that you might find hinder your productivity. You will have to work with people who slow down the team’s progress due to their poor contribution.


A real-world introduction to event-driven architecture

An event-driven architecture eliminates the need for a consumer to poll for updates; it instead receives notifications whenever an event of interest occurs. Decoupling the event producer and consumer components is also advantageous to scalability because it separates the communication logic and business logic. A publisher can avoid bottlenecks and remain unaffected if its subscribers go offline, or if their consumption slows down. If any subscriber has trouble keeping up with the rate of events, the event stream records them for future retrieval. The publisher can continue to pump out notifications without throughput limitations and with high resilience to failure. Using a broker means that a publisher does not know its subscribers and is unaffected if the number of interested parties scales up. Publishing to the broker offers the event producer the opportunity to deliver notifications to a range of consumers across different devices and platforms. Estimates suggest that 30% of all global data consumed by 2025 will result from information exchange in real time.
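The properties described above – the publisher not knowing its subscribers, and the stream recording events so a slow or late consumer can catch up – can be shown with a minimal in-memory broker. This is a teaching sketch (event payloads invented), not a substitute for a real broker such as Kafka, which adds durability, partitioning and consumer offsets.

```python
# Minimal in-memory broker: publishers and subscribers are decoupled,
# and every event is appended to a log so late subscribers can replay.
class Broker:
    def __init__(self):
        self.log = []          # the recorded event stream
        self.subscribers = []

    def publish(self, event):
        self.log.append(event)          # recorded regardless of consumers
        for callback in self.subscribers:
            callback(event)             # push, not poll

    def subscribe(self, callback, replay=False):
        if replay:                      # catch up on events missed so far
            for event in self.log:
                callback(event)
        self.subscribers.append(callback)

broker = Broker()
broker.publish({"type": "order_placed", "id": 1})   # no subscribers yet

received = []
broker.subscribe(received.append, replay=True)      # late subscriber catches up
broker.publish({"type": "order_shipped", "id": 1})
print(received)
```

Note the publisher's code never changes when subscribers come and go – that is the decoupling the article credits with scalability and resilience.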


Pros and cons of cloud infrastructure types and strategies

A multi-cloud strategy simply means that an organisation has chosen to use multiple public cloud providers to host their environments. A hybrid cloud approach means that a company is using a combination of on-premises infrastructure, private cloud and public cloud — and possibly more than one of the latter, meaning that company would be implementing a multi-cloud strategy with a hybrid approach. At times, these terms are used interchangeably. Companies choose a multi-cloud strategy for a multitude of reasons, not least of which is avoiding vendor lock-in. Spreading workloads across multiple cloud providers increases reliability, as a company is able to fail over to a secondary provider if another provider experiences an outage. Optionality is a huge benefit to companies who want to be able to pick and choose which services will most seamlessly integrate into their environments, as each major public cloud provider provides some unique services for different types of workloads. Furthermore, when a company uses multiple public cloud providers, it retains flexibility and can transfer workloads from one provider to another.


Gartner: Top strategic technology trends for 2022

The first of those trends is the growth of the distributed enterprise. Driven by the massive growth in remote and hybrid working patterns, traditional office-centric organizations are evolving into geographically distributed enterprises. “For every organization, from retail to education, their delivery model has to be reconfigured to embrace distributed services,” Groombridge said. Such operations will stress the network that supports users and consumers alike, and businesses will need to rearchitect and redesign to handle it. ... “Data is widely scattered in many organizations, and some of that valuable data can be trapped in silos,” Groombridge said. “Data fabrics can provide integration and interconnectivity between multiple silos to unlock those resources.” Groombridge added that data-fabric deployments will also force significant network-topology readjustments and, in some cases, to work effectively, could require their own edge-networking capabilities. The result is that the fabric will unlock data that can be used by AI and analytics platforms to support new applications and bring about business innovations more quickly, Groombridge said.


BlackMatter Ransomware Defense: Just-In-Time Admin Access

To be fair, the BlackMatter alert, beyond including intrusion system rules, also details the group's known tactics, techniques and procedures, and includes additional recommended defenses, such as implementing "time-based access for accounts set at the admin-level and higher," due to ransomware-wielding attackers' propensity to attack organizations after hours, over weekends, on Christmas Eve or any other inconvenient time. What does time-based access look like? One approach is just-in-time access, which enforces least-privileged access except for temporarily granting higher access levels via Active Directory. "This is a process where a network-wide policy is set in place to automatically disable admin accounts at the AD level when the account is not in direct need," according to the advisory. "When the account is needed, individual users submit their requests through an automated process that enables access to a system, but only for a set timeframe to support task completion."
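The just-in-time pattern in the advisory can be sketched as a small state machine: an account holds no standing admin rights, and a request elevates it only for a bounded window. The Python below is a simplified illustration (usernames and durations invented); a real deployment would enforce the policy at the Active Directory level through the automated request process the advisory describes.

```python
# Sketch of just-in-time admin access: elevation is granted only for a
# set timeframe, after which the account falls back to least privilege.
import time

class JitAccess:
    def __init__(self):
        self.grants = {}  # user -> expiry timestamp

    def request_admin(self, user, duration_s):
        """Grant admin rights for a bounded task window."""
        self.grants[user] = time.time() + duration_s

    def is_admin(self, user, now=None):
        """Admin only while an unexpired grant exists; default is denial."""
        now = time.time() if now is None else now
        return self.grants.get(user, 0) > now

jit = JitAccess()
jit.request_admin("alice", duration_s=3600)   # one-hour task window
print(jit.is_admin("alice"))                  # True while the window is open
```

Because expiry is automatic rather than relying on someone remembering to revoke access, the account is already disabled when an attacker strikes after hours or over a holiday weekend.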


Why is collaboration between the CISO and the C-suite so hard to achieve?

Poor communication between the CISO and business unit heads is a major barrier to safe and successful business transformation. To properly educate people within the organisation about the realities of a cyber attack, the CISO must move beyond data, buzzwords and technical jargon and tell a story that brings the threat to life for those without subject-matter insight. If the CISO can intelligibly and clearly articulate the threats and the steps necessary to mitigate them, they are much more likely to capture executives’ attention and help ensure that all key stakeholders understand the trade-offs between new technology and added risk. If they’re able to adapt their language to specific individuals and business functions, they’ll have even greater success. For instance, a chief marketing officer is most likely interested in the risks to customer data, while chief financial officers will want to better understand how to secure banking information. ... “CISOs still have more work to do in breaking down the communication barriers by talking in less technical language for boards to better understand potential business risks.”


Developer Learning isn’t just Important, it’s Imperative

Every company that isn’t consistently upgrading its codebase or shifting to new frameworks is facing a serious business problem. If your codebase is getting older and older, you face the risk of massive future migrations. And if you’re not moving to new framework versions, you’re missing important benefits that your team could otherwise leverage. Technical debt naturally increases over time. The longer it goes unaddressed, the sooner you’ll get stuck paying high costs in migration, hiring, or massive upskilling efforts that take weeks or months. Like saving for retirement, incremental upskilling pays dividends in the long run. Every industry leader I’ve talked to worries about the scarcity of high-quality software engineers. That means companies feel serious pressure to constantly hire new, better developers. But rather than looking externally for a solution, what if companies looked internally? Here’s the reality: meaningful developer learning helps companies convert silver medalists into gold medalists.



Quote for the day:

"A true dreamer is one who knows how to navigate in the dark" -- John Paul Warren

Daily Tech Digest - October 18, 2021

Magnanimous machines: Why AI work should work for people and not the other way around

The consolidation of power amongst a Big Tech elite fused with state intelligence grows ever stronger. These entities can know everything about us, yet carefully hide their own clandestine, obfuscated activities. The best defence against this asymmetry is radical, mandated, and cryptographically secure transparency. To illustrate, in commercial aircraft we have two essential data recorders (‘black boxes’). One monitors the aircraft itself, and the other monitors cockpit chatter. Both recorders are necessary to understand why an incident has occurred. For the same reasons, we need a similar approach to humane technology. Transparency is the foundation upon which every other aspect of ethical technology rests. It is essential to understand a system and its functions, as well as attributes of the organisation and, not to forget, those who steer it. Through transparency, we can verify that the incentive structures within organisations are aligned towards producing honest, good-faith outcomes. We can understand what may have gone wrong and how to fix it in the future.


8 Keys to Failproof App Modernization

Typically, modernization initiatives are strategized or rolled out before major events or milestones: data center and vendor contracts coming up for renewal, software and hardware platforms going end of service and support life, government-imposed deadlines to implement regulatory and compliance requirements, or an ageing workforce and the risk of a shortage of skills. In all such scenarios, since the accumulated technical debt is so high, these become multi-year, multi-million modernization programs. Risks are correspondingly higher in such large programs. And to optimize costs and minimize risks, the temptation sometimes is to somehow get these workloads to the target platform [containerize or rehost without really changing the underlying architecture]. This will result in more technical debt and will necessitate another modernization initiative in a few years, and so it goes. The chances of success are much higher if the initiatives are incremental in nature and time-bound, say 3-6 months. In fact, it is a recommended practice in agile development to pay down technical debt regularly, every single sprint.


Three key issues to tackle before smart cities become a reality

Many smart systems require data to be validated and assimilated in real-time for it to be relevant. This poses a problem, in that it requires every citizen to agree to their data being collected and shared, which in turn requires trust. That means the collection of data and its use to influence critical decisions in smart cities, needs careful consideration. However, citizens often worry about being ‘tracked’ – a difficult perception to eradicate in a world where privacy and security are among the biggest challenges each of us faces. Overcoming it requires us to build a comprehensive data privacy and security strategy into any smart city development, with local governments then responsible for educating individuals and society on how their data will be stored, who has access, and how it can be used. Such strategies require careful consideration, as any mistakes that harm public trust could impact the success of smart cities. The NHS COVID-19 app is a good example of this – once people lacked trust in the application, it took only a matter of days for thousands of people to delete it. 


Engineering Digital Transformation for Continuous Improvement

Getting organizations to invest in improvements and embrace new ways of working is a challenge. They don’t just need the right technical solutions, they also need to address the organizational change management challenges that are creating resistance to new ways of working. Organizations frequently have champions that have ideas for improvement and are trying to influence change without a lot of success. These champions find that the harder they push for change, the more people resist. We can have all the best approaches in the world, but if we can’t figure out how to overcome this resistance, organizations will never adopt them and realize the benefits. While pushing for change is the natural approach, research by organizational change management experts, like Professor Jonah Berger in his book “The Catalyst,” suggests this is the wrong approach. His research shows that the harder you push for change, the more people resist. Whenever they feel like they are trying to be influenced, their anti-persuasion radar kicks in and instead they start shooting down ideas and resisting the change being offered.


A transactional approach to power

As Battilana and Casciaro tell it, it’s not your personal or positional power that determines your effectiveness in any given situation. It is your ability to understand what resources the involved parties want and how the resources are distributed—that is, the balance of power. “We find this extremely compelling,” explains Casciaro, “because it brings power relationships—whether they are interpersonal, intergroup, interorganizational, or international—down to four simple factors.” Taking this a step further, the ability to shift the balance of power within a situation determines your success at exercising power. Battilana and Casciaro find there are several key strategies that support this ability to rebalance power. If you have resources the other party values, attraction is a key strategy. You try to increase the value of those resources for the other party. Personal and corporate brand-building are organized around this strategy. If the other party has too many paths to access your resources, consolidation is a key strategy. You try to eliminate or otherwise lessen the alternatives. Employees join unions to limit the alternatives of employers and increase their power.


Treasury Dept. to Crypto Companies: Comply with Sanctions

The announcement is the latest in a series of moves from the Biden administration to combat ransomware, following high-profile attacks this year that disrupted the East Coast's fuel supply during the Colonial Pipeline incident; jeopardized the nation's meat supply through the attack on JBS USA; and knocked some 1,500 downstream organizations offline by zeroing in on managed service provider Kaseya over the July Fourth holiday. Last month, the Treasury Department blacklisted Russia-based cryptocurrency exchange Suex for allegedly laundering tens of millions of dollars for ransomware operators, scammers and darknet markets. In its latest issuance, the department alleges that over 40% of Suex’s transaction history was associated with illicit actors, involving the proceeds of at least eight ransomware variants. Similarly, this week, the White House National Security Council facilitated a 30-nation, two-day "counter-ransomware" event, at which senior officials strategized on ways to improve network resiliency, address illicit cryptocurrency usage, and heighten law enforcement collaboration and diplomacy.


DevSecOps: 11 questions to ask about your security strategy now

Where does friction exist between security and business goals? The question is relatively self-explanatory: DevSecOps exists in part to remove friction and bottlenecks that have historically introduced risks rather than reducing them. The question also has a subtext: What are we doing about it? This friction often goes unaddressed because, well, it’s unaddressed – as in, people avoid pointing it out or talking about it, whether because of poor relationships, fear factors, cultural acceptance, or other reasons. Leaders need to take an active role here by showing their willingness to talk about it, without finger-pointing or other toxic behaviors. “Leaders should constantly be probing and trying to understand the friction points between the business and DevSecOps,” says Jerry Gamblin, director of security research at Kenna Security, now part of Cisco. “These often uncomfortable conversations will help you refocus your team’s goals on the company’s goals.” A willingness to have those uncomfortable conversations as a pathway to positive long-term change is a key characteristic of a healthy culture.


The importance of crisis management in the age of ransomware

How to prepare for ransomware attacks is an often-asked question. From my point of view, the best action is to go through the checklist of security controls that prevent hackers from taking control of your network. Organizations like Servadus offer a Ransomware Readiness Assessment, which helps organizational leadership identify current risks to the corporation. Of course, having up-to-date incident response and business continuity plans is part of that assessment. Beyond the checklist, the real value comes from remediating weak cybersecurity controls. Additionally, organizations should implement a framework to shore up security control implementation and sustainability. Many organizations try to maintain compliance and security controls but become vulnerable to attacks three to six months after validating that those controls are in place. The long-term strategy is about validating sustainable security controls. The service framework also allows organizations to evaluate threats to the organization and vulnerabilities in the system software in use.
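The "vulnerable three to six months after validation" problem is fundamentally about controls going stale. As a minimal sketch of the idea – the control names and the 90-day re-validation window here are illustrative assumptions, not part of any assessment framework mentioned above – a simple check can flag controls whose last validation falls outside a rolling window:

```python
from datetime import date, timedelta

# Illustrative sketch: flag security controls whose last validation is
# older than a rolling re-validation window. The window length and the
# control names below are hypothetical examples.
REVALIDATION_WINDOW = timedelta(days=90)

def stale_controls(controls, today):
    """Return the names of controls last validated more than
    REVALIDATION_WINDOW before `today`."""
    return [name for name, last_validated in controls.items()
            if today - last_validated > REVALIDATION_WINDOW]

controls = {
    "mfa_enforced": date(2021, 9, 30),
    "offline_backups_tested": date(2021, 5, 1),
    "patch_baseline_reviewed": date(2021, 3, 15),
}

print(stale_controls(controls, today=date(2021, 10, 24)))
# → ['offline_backups_tested', 'patch_baseline_reviewed']
```

Running a check like this on a schedule, rather than once at audit time, is one way to make control validation "sustainable" in the sense the article describes.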


The importance of staff diversity when it comes to information security

A diverse information security team makes more rounded decisions, which contributes to the organisation’s overall strength. Because each employee brings a unique perspective to the problem, it is simpler to recognise and address hidden vulnerabilities in the security operations, as well as to identify and correct the deficiencies of other employees, helping them to grow in their own areas of expertise. Consider the likelihood of a breach, and the “red team” that will be assigned to deal with the situation: the SOC analyst examines the logs, the security engineer looks for the vulnerability, and other team members work on the defensive approach to guard against it. As a result, in order to maintain a strong information security team, it is critical to have a diverse workforce. It facilitates task efficiency while encouraging alternative viewpoints. ... An organisation’s security cannot be handled by a single product, and in order to maintain and handle those security products, companies require a large number of employees. So, in order to work efficiently and effectively without being reliant on a single person or product, this should be common knowledge.


Is it right or productive to watch workers?

The recent rise in employee surveillance accelerated during the pandemic, largely because it had to, but the bottom line is that we are now more than ever accustomed to being watched. We accept the intrusion of cameras in novel spaces under the promise of increased safety; doorbell cameras spring to mind, but so too do webcams and smartphones; we accept data tracking to prove we’re “not a robot” on websites; we accept that our information, our clicks, and our preferences are observed and noted. We seem to be primed now to accept that companies have a reasonable expectation to protect their own safety, so to speak, by monitoring us. One recent survey by media researcher Clutch of 400 US workers found that only 22% of 18- to 34-year-old employees were concerned about their employers having access to their personal information and activity from their work computers. Meanwhile, in a pre-pandemic survey of US workers by US media group Axios from August 2019, 62% of respondents agreed that employers should be able to use technology to monitor employees.



Quote for the day:

"Leaders are more powerful role models when they learn than when they teach." -- Rosabeth Moss Kanter