Daily Tech Digest - October 11, 2021

How businesses can combat data security and GDPR issues when working remotely

Whether employees use a business or personal device, robust secure device management and effective Mobile Device Management (MDM) are key to keeping data on mobile devices safe from threats. Adopting data encryption across the software and devices used remotely also protects data from unauthorised use, even in the event of a security breach. In addition, a corporate Virtual Private Network (VPN) provides an encrypted connection from a device to the network, allowing data to travel safely between the office and remote working environments. To mitigate the risk of unauthorised access, employees should have access only to the data they require to complete their work, with measures that restrict data on a ‘need-to-know’ basis implemented where possible. Crucially, companies should provide every employee working from home with a clear, documented remote working policy that outlines precisely how personal and company data should be handled to keep it secure.
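The ‘need-to-know’ restriction described above amounts to a deny-by-default access check. A minimal sketch; the role names and dataset names here are hypothetical, not taken from any particular MDM product:

```python
# Minimal sketch of 'need-to-know' data access: each role is granted
# only the datasets required for its work (hypothetical roles/datasets).
ROLE_GRANTS = {
    "hr": {"employee_records"},
    "sales": {"customer_contacts", "pipeline"},
    "finance": {"invoices", "payroll"},
}

def can_access(role: str, dataset: str) -> bool:
    """Deny by default; allow only datasets explicitly granted to the role."""
    return dataset in ROLE_GRANTS.get(role, set())

print(can_access("sales", "pipeline"))  # True
print(can_access("sales", "payroll"))   # False
```

The important design choice is the default: an unknown role or an ungranted dataset is refused, so forgetting to configure someone fails safe rather than open.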


Digital transformation: 4 excuses to leave behind

Outdated, manual, and siloed processes not only slow your business but also increase costs, because maintaining broken, outdated processes is expensive. As we emerge from the pandemic, most businesses are realizing that their existing business processes are not sustainable in the new normal. With remote and hybrid work becoming standard, organizations have had to think on their feet to maintain business as usual, and digital transformation makes this possible. COVID lockdowns made it urgent for enterprises to enable secure remote operations, which in turn made them realize the importance of migrating their operations to the cloud. Cloud adoption has increased dramatically since the pandemic began, enabling businesses to operate remotely without compromising the speed or quality of services. If you haven’t already done so, start by identifying the “low-hanging fruit” – i.e., the processes best suited to your initial automation roadmap – and then scale up. Transitioning to the cloud opens countless possibilities, from reducing IT infrastructure costs to scaling with business needs.


4 questions that get the answers you need from IT vendors

Enterprises don’t plan for adopting abstract technology concepts; they plan for product adoption and deployment. The network vendors who offer the products are the usual source of information, whether delivered through news stories, vendor websites, or sales engagement. Enterprises expect the seller to explain why their product or service is the right idea, and sellers largely agree. It’s just a question of which specific sales process is supposed to provide that critical information. Technology salespeople, like all salespeople, make their money largely on commissions. They call on prospects, pitch their products and services, and hopefully get the order. Their goal is a fast conversion from prospect to customer, and nearly all salespeople will tell you that they dread the “educational sell” above all. That happens when a prospect knows so little about the product or service being sold that they can’t make a decision at all and have to be taught the basics. The salesperson who’s teaching isn’t earning commissions, and their company isn’t hitting its revenue goals.


3 Things to Consider Before Investing in New Technology for Your Small Business

When you are searching for tech to suit your business's unique needs, it’s important to keep the happiness of your employees at the forefront. That’s what authentically attracts new talent to your company and entices people to stay. In many cases, happiness is derived from productivity. If workers know what they need to do but don’t have the tools to do it quickly, they will get discouraged, and customers will complain because they didn’t have a great experience. So, stop and assess why employees are experiencing each challenge as they move through their tasks. Consider what you genuinely wish could be better or easier for you, your employees and everyone else involved, then think about how technology might solve each problem. If you equip a first-day employee with a mobile device that helps them get through a full inventory count comfortably and without making a single mistake, they are going to leave work feeling empowered. They’ll share their positive experience with friends, family and (if you’re lucky) social media, and word will spread about how great it is to work for your company.


Cloud Cost Optimization: A Pivotal Part of Cloud Strategy

To maintain an optimal state, you need to ensure that sound policies around budgeting are adhered to; in terms of governance, the framework should oversee resource creation permissions as well. ... Once you gain visibility into spending metrics, you must identify which unused resources can be disposed of and which resources can be optimized. The journey for any cloud cost optimization starts with an initial analysis of the current cloud estate and identifying optimization opportunities across compute, network, storage, and other cloud-native features. Any cloud cost optimization framework needs a repository of cost levers with their associated architecture and feature trade-offs. Businesses also need governance — policies around budget adherence, resource creation permissions, etc. — to maintain an optimal state. A practical cost optimization framework requires all three of the above. Achieving initial savings entails analyzing the estate and identifying optimization opportunities across compute, storage, and networking, focusing first on the highest costs and on incremental, month-over-month cost growth; cloud vendors provide access to the underlying cost and utilization data.
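The first step, flagging unused and underutilized resources and ranking them by cost, can be sketched as follows. The resource records and thresholds are illustrative; real figures would come from your cloud vendor's cost and utilization APIs:

```python
# Illustrative sketch: flag unattached ("unused") resources and
# low-utilization ones as optimization candidates, highest cost first.
resources = [
    {"id": "vol-1", "type": "storage", "attached": False, "monthly_cost": 40.0,  "util": 0.00},
    {"id": "vm-1",  "type": "compute", "attached": True,  "monthly_cost": 220.0, "util": 0.07},
    {"id": "vm-2",  "type": "compute", "attached": True,  "monthly_cost": 180.0, "util": 0.65},
]

def optimization_candidates(resources, util_threshold=0.10):
    """Return unused or underutilized resources, sorted by monthly cost."""
    flagged = [r for r in resources
               if not r["attached"] or r["util"] < util_threshold]
    return sorted(flagged, key=lambda r: r["monthly_cost"], reverse=True)

candidates = optimization_candidates(resources)
potential_savings = sum(r["monthly_cost"] for r in candidates)
print([r["id"] for r in candidates])  # ['vm-1', 'vol-1']
print(potential_savings)              # 260.0
```

Sorting by cost implements the "highest costs first" advice: the big idle VM is worth acting on before the orphaned volume.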


Applying Behavioral Psychology to Strengthen Your Incident Response Team

Orlando says it's natural for relationships to form, and for trust to form, in an incident response team and within a larger organization. In his experience, he often encounters what he calls the "rock star problem." "You've got one or a few people [who are] very, very capable, very knowledgeable, and the team sort of coalesces around those individuals," he says. "Which is not necessarily a bad thing, but it can create issues when those individuals inevitably move on, or maybe they [have] less than optimal work habits, or behaviors, or things we want to try to account for." Compounding CSIRTs' collaboration issues is a prominent focus on technical tools and skills, Orlando adds. Incident response teams are "often inundated" with tools to address technical problems in security and incident response; however, there is a "definite lack" of tools to address some of the social and collaboration challenges CSIRTs face in operating within the context of a multigroup, multiteam system as they need to do.


Netherlands Says Armed Forces May Combat Ransomware Attacks

Countries are being held accountable for their actions and inaction via diplomatic responses such as actions against cross-border criminal cyber operations and measures such as sanctions, which are more powerful if they are designed in a broad coalition context, Knapen says. "Within the EU, the Netherlands has therefore been a driving force behind the EU Cyber Diplomacy Toolbox and the adoption of the ninth EU cyber sanctions regime in May 2019, and the Netherlands is committed to further developing these instruments. This provides the EU with good tools to respond faster and more vigorously to cyber incidents. Recent EU statements and sanctions show that these instruments are delivering concrete results," he notes. Knapen is also pushing for diplomatic channels for bilateral cooperation between countries in judicial investigations against ransomware, which he says can be useful if cooperation through international judicial channels is insufficient. "The Netherlands can then emphasize the importance attached to cooperation through diplomatic channels," he says.


Can India Address the Growing Cybersecurity Challenges in the Nuclear Domain?

India has established several key agencies to counter its growing cybersecurity challenges. However, the effectiveness of its cybersecurity policies in the nuclear domain depends on its ability to incorporate cybersecurity, cyber infrastructure, and the agencies that operate them into the larger nuclear security framework. Efficient and effective cybersecurity mechanisms require cohesive inter-agency coordination. It is also essential for government authorities to acknowledge, interact with, and regularly evolve cybersecurity protocols and procedures to reflect a rapidly changing security environment. An effective cybersecurity policy also requires a clear demarcation of roles and responsibilities, along with contingency plans for short- and long-term implementation that can be adjusted as circumstances and technology change. Additionally, and most importantly, a renewed emphasis on understanding cyber risks and acknowledging the importance of cyber-nuclear security is essential in the Indian context.


How technology can drive positive change in insurance post-COVID

From forced closures to operational transformation, the COVID-19 pandemic has impacted businesses in the UK and worldwide. The world of insurance is no exception to this rule – but the nature of the industry and its interests have led to a layered set of challenges and opportunities beyond the obvious disruptions to working practices. These challenges have been laid out in a recent report from EY, which lists a number of early pandemic issues for the industry including the tricky transition to remote working, a “strong push toward digitisation”, and the embrace of virtual interactions for clients and distribution partners. While these concerns may feel familiar, EY’s report goes on to draw out the specific difficulties faced by insurers, where COVID-19 has occasioned “mounting consumer, political, and legislative pressure to cover pandemic-related business interruption claims”. Not only has the industry needed to embrace new technologies and practices to adapt to the pandemic, but it has also needed to address some of the COVID-driven burdens faced by clients.


Safe and secure disposal of end-of-life IT hardware

First, your business needs to develop a plan of action that brings together your IT, information security and office management staff, with oversight from senior executives. To be fully effective, it should establish a decommissioning strategy that covers the compliant disposal of retired hardware and the destruction of data. Next, you need to ensure that all the data on your old hardware has been permanently eradicated and is non-recoverable. Given the importance of this step, it is likely that you’ll need assistance from a third-party disposition expert. Third, you need to know the whereabouts of your assets throughout the disposition process. A secure chain of custody is vital to prove compliance, so once again it is advisable to employ the services of an outside expert – a company that offers rigorous security practices, such as asset itemisation, GPS tracking and protected transportation, all backed up with supporting documentation. This ensures that IT assets are tracked at every step, from pick-up to final disposition.
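The chain-of-custody record can be made tamper-evident by hash-chaining each tracking entry to the previous one, so any later alteration breaks the chain. This is only an illustrative sketch, not a substitute for a disposition vendor's audited system:

```python
import hashlib
import json

def add_entry(log, asset_id, event, location):
    """Append a custody record whose hash covers the previous record's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"asset_id": asset_id, "event": event,
              "location": location, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    log.append(record)
    return log

def verify(log):
    """Recompute every hash; any edited or reordered entry breaks the chain."""
    prev = "0" * 64
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

log = []
add_entry(log, "SRV-042", "picked_up", "head office")
add_entry(log, "SRV-042", "received", "disposal facility")
add_entry(log, "SRV-042", "data_destroyed", "disposal facility")
print(verify(log))  # True
```

Changing any field of an earlier entry, say its location, makes `verify` return False, which is exactly the "supporting documentation" property a compliance audit needs.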



Quote for the day:

"The final test of a leader is that he leaves behind him in other men, the conviction and the will to carry on." -- Walter Lippmann

Daily Tech Digest - October 10, 2021

Data Science Process Lifecycle

When you’re SO focused on tech and coding, it can be easy to lose sight of the actual business goal and vision. You might start spinning your wheels, going off on tangents, and overall contributing to business inefficiencies - often without noticing. Not to mention, having to execute projects without a firm understanding of your place in the company’s vision and without a strategy for forward momentum can be downright frustrating and inefficient. ... How are data pros supposed to excel without strong leadership and frameworks to guide them in their execution? We need to make sure that as data implementation folks, we keep our eyes on the prize. And as leaders, we need to make sure data implementation workers are included in the overarching strategy from the get-go. If you’re ready to make sure the data projects you work on always stay on track and profitable, let’s dive into the data science process lifecycle framework. ... Essentially, the data science process lifecycle is a structure through which you can manage the implementation of your data initiatives. It allows those who work in data implementation to see where their role first comes into the bigger picture of the project, and ensures there’s a cohesive management structure.


Distributed transaction patterns for microservices compared

Having a monolithic architecture does not imply that the system is poorly designed or bad. It does not say anything about quality. As the name suggests, it is a system designed in a modular way with exactly one deployment unit. Note that this is a purposefully designed and implemented modular monolith, which is different from an accidentally created monolith that grows over time. In a purposeful modular monolith architecture, every module follows the microservices principles. Each module encapsulates all the access to its data, but the operations are exposed and consumed as in-memory method calls. With this approach, you have to convert both microservices (Service A and Service B) into library modules that can be deployed into a shared runtime. You then make both microservices share the same database instance. Because the services are written and deployed as libraries in a common runtime, they can participate in the same transactions. Because the modules share a database instance, you can use a local transaction to commit or roll back all changes at once.
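The shared-database point can be sketched with two hypothetical modules using one SQLite connection: because both operate inside the same local transaction, either both changes commit or both roll back:

```python
import sqlite3

# Two hypothetical "modules" (order and inventory) sharing one connection,
# so a single local transaction covers both of their writes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")
conn.execute("CREATE TABLE stock (item TEXT, qty INTEGER)")
conn.execute("INSERT INTO stock VALUES ('widget', 5)")
conn.commit()

def place_order(item):      # module A: exposed as an in-memory method call
    conn.execute("INSERT INTO orders (item) VALUES (?)", (item,))

def reserve_stock(item):    # module B: same connection, same transaction
    cur = conn.execute(
        "UPDATE stock SET qty = qty - 1 WHERE item = ? AND qty > 0", (item,))
    if cur.rowcount == 0:
        raise RuntimeError("out of stock")

try:
    place_order("widget")
    reserve_stock("widget")
    conn.commit()           # both changes become visible together
except RuntimeError:
    conn.rollback()         # or neither does

print(conn.execute("SELECT qty FROM stock").fetchone()[0])  # 4
```

If `reserve_stock` raises, the rollback also discards the already-inserted order row, which is the atomicity that distributed microservices have to re-create with far heavier patterns such as sagas or two-phase commit.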


How disagreement creates unity in open source

For something to be learned in a disagreement, both sides must be open to different perspectives. I once coached an engineer who had strong opinions and constantly found himself in decision gridlock. Team meetings became so tense that we couldn't get past even the first agenda item before the hour was up. This engineer was frustrated and wanted to know why he couldn't convince people of his ideas. My advice surprised him: he should allow himself to be convinced as much as he tried to convince others. When he applied this advice, it became noticeably easier to make progress in meetings. Because other team members felt respected, we were arguing less and focusing more on how to reach our goals as a team. When you focus solely on advocating for your own ideas, you are more likely to miss the critical points seen by others, however unintentionally. Having a collaborative mindset keeps disagreement healthy. A collaborative mindset means prioritizing the needs of the team or community rather than the individual. When these needs fall out of balance, having a shared purpose can recenter a team. It's not about being right; it's about doing right by the group.


Microservices Adoption and the Software Supply Chain

What we continue to call technical debt is really the activities that are related to tending to and upgrading our software when third-party components are evolving or have common vulnerabilities and exposures (CVEs) and need to be upgraded. These are tedious, repetitive tasks that usually fall to the most experienced engineers as they require technical expertise to do correctly. Such activities can paralyze engineering organizations and are a tremendous burden on engineers; that often leads to burnout. Up to 30% of engineering time is spent on technical debt. The perception that somehow developers were responsible for accruing this technical debt and are doing something wrong that prevents them from keeping up is hugely demoralizing and demotivating. However, if we reframe technical debt as software supply chain management and stop blaming engineering for it, we can make maintenance more predictable and consistent. By taking steps like inventorying third-party components and determining how pervasive they are in the application, an organization can arrive at a maintenance estimate.
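Inventorying third-party components and gauging how pervasive they are can start as simply as counting imports across the codebase. A minimal sketch; the file contents and the stdlib set here are simplified assumptions:

```python
import ast

# Illustrative sketch: inventory third-party imports across source files and
# count how many files use each, as a rough "pervasiveness" measure.
# (File contents are inlined here; in practice you would walk the repo.)
sources = {
    "api.py":  "import requests\nimport json\n",
    "jobs.py": "import requests\nfrom celery import task\n",
    "db.py":   "import sqlalchemy\n",
}
STDLIB = {"json"}  # simplified; see sys.stdlib_module_names on Python 3.10+

counts = {}
for name, code in sources.items():
    mods = set()
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, ast.Import):
            mods.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module.split(".")[0])
    for m in mods - STDLIB:
        counts[m] = counts.get(m, 0) + 1

print(sorted(counts.items(), key=lambda kv: -kv[1]))
# [('requests', 2), ('celery', 1), ('sqlalchemy', 1)]
```

A component used in many files costs more to upgrade than one used in a single place, so a count like this feeds directly into the maintenance estimate the excerpt describes.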


When it Comes to Ransomware, Should Your Company Pay?

Theoretically, if organizations pay the ransom, the attackers will provide a decryption tool and withdraw the threat to publish stolen data. However, payment doesn’t guarantee all data will be restored. Executives need to carefully consider the realities of ransomware, including: On average, only 65% of the data is recovered, and only 8% of organizations manage to recover all data; Encrypted files are often unrecoverable. Attacker-provided decrypters may crash or fail. You may need to build a new decryption tool by extracting keys from the tool the attacker provides; Recovering data can take several weeks, particularly if a large amount of it has been encrypted; There is no guarantee that the hackers will delete the stolen data. They could sell or disclose the information later if it has value. Ransomware is a sustainable and lucrative business model for cybercriminals, and it puts every organization that uses technology at risk. In many cases, it is easier and cheaper to pay the ransom than to recover from backup. But supporting the attackers’ business model will only lead to more ransomware.


Open source for good

In the pandemic, open source has been critical. It has touched billions of lives, and it has saved lives. I saw this unfold daily at SUSE, which specialises in bringing open source software to business. I marvelled at the importance of universal access to critical code to design contact-tracing technology, helping unravel the complexities of the virus’s path across the planet. When Singapore led the world in implementing contact tracing, open source made it possible. When large-scale Covid-19 testing and analysis became available, open source made it possible (and we are proud to have empowered our customer, Ruvos, to achieve this). When healthcare organisations needed a cost-effective way to analyse torrents of data at a moment’s notice, open source made it possible. Open source pervades our lives. It is a remarkable, often unsung, force for good. Open source software is embedded in mammogram machines, it powers autonomous driving systems that make people safer on the road, air traffic control systems at airports, and weather forecasting technology that warns of storms and even earthquakes.


5 principles for your cloud-oriented open-source strategy

Practitioners on your team should investigate projects that have the potential to solve a “job to be done” for your business. What they turn up may need more time to bake before it can be used in a meaningful way at your company, but if a project isn’t immediately useful, star the repo and keep tabs on the project. More importantly, make sure your engineers have time to learn and try new things every week and even to contribute to open-source projects. It can do wonders for morale, retention, and recruitment, and if the open-source projects are ones that your business depends on, the benefits multiply. ... It’s easy to have more open-source tech in your IT organization than you realize. Using open-source software is often the easiest way for an engineer to add a feature to in-house software or fix a bug in third-party software. While open-source proliferation means your team is finding creative ways to solve business problems, you need to understand what technology is being used and how it affects your organization.


Enterprise architecture and the sustainability puzzle

When it comes to revolutionizing digital infrastructure, the opportunity to increase sustainability and lower emissions lies directly with Enterprise Architecture (EA) teams and related disciplines. In short, the purpose of these teams is to create sustainable organizations delivering business objectives supported by modern digital platforms. This integrated perspective enables business and IT executives to quickly develop an understanding of where change is required and the impact this will have. So as we look to achieve the United Nations’ goals for sustainable development, EA’s overall targets should therefore include sustainable IT practices. A particular goal EA should help organizations achieve includes goal 9, which highlights the need to ‘build resilient infrastructure, promote inclusive and sustainable industrialization and foster innovation.’ Striving to achieve this goal won’t be a simple fix, but should certainly be seen as an opportunity for a profound, systemic shift to a more sustainable economy that works for both people and the planet.


Cybersecurity Risk Management: Are Your Enterprise Architecture and Security Teams Lacking Engagement?

The Digital Twin of an Organization (DTO) gives you a virtual representation of your organization, showing how the company performs as a system. It’s also a highly effective communication tool. With it, you’re able to visualize ongoing projects and see where they overlap. In addition, it tracks processes, systems, and information. The DTO modeling can be expanded with Scenarios. Using Scenarios, you can focus on various points to map out potential futures, including risk scenarios. From a risk management and security perspective, you can see what would happen if a critical system went down, which departments would be paralyzed, and how they could continue to function. For example, when mapping out the cybersecurity risk of a ransomware attack, the Digital Twin of an Organization could give you a clear overview of which parts of the organization are most exposed and show how the attack could develop. Let’s say there’s a new virus affecting laptops that aren't fully patched. You can easily identify which parts of the organization have the most unpatched laptops.
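The unpatched-laptop query above can be sketched over a digital-twin asset inventory. The data shape and the patch-level convention here are hypothetical:

```python
# Illustrative sketch: given asset records from a digital-twin model,
# find which departments have the most laptops below the required patch level.
assets = [
    {"dept": "finance", "type": "laptop", "patch_level": "2021-08"},
    {"dept": "finance", "type": "laptop", "patch_level": "2021-10"},
    {"dept": "sales",   "type": "laptop", "patch_level": "2021-07"},
    {"dept": "sales",   "type": "laptop", "patch_level": "2021-06"},
    {"dept": "it",      "type": "server", "patch_level": "2021-05"},
]
REQUIRED = "2021-09"  # minimum acceptable patch level (ISO dates sort lexically)

exposure = {}
for a in assets:
    if a["type"] == "laptop" and a["patch_level"] < REQUIRED:
        exposure[a["dept"]] = exposure.get(a["dept"], 0) + 1

print(sorted(exposure.items(), key=lambda kv: -kv[1]))
# [('sales', 2), ('finance', 1)]
```

Ranking departments by count gives exactly the "which parts are most exposed" overview the excerpt describes, and the same pattern extends to any other attribute the twin tracks.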


How To Leverage Enterprise Architects As CXO Advisors

Enterprise architects create holistic transparency and expertise across all layers, from business to IT and infrastructure landscapes. They also proactively identify the potential for optimization when it comes to business models and processes. The most important value that enterprise architects bring to the table is overseeing technology selection and the design of solution architecture. This showcases the increasing importance of enterprise architecture in shaping the agenda of CXOs (C-level executives). Enterprise architects take a holistic approach in design, planning, implementation and KPI measurement. They are able to fully understand the business strategy and identify needed changes, as well as additional technological capabilities that are required. They not only identify the requirements but can plan and implement them. This is done by being cognizant of the organizational strategy, business environment, stakeholder interests, constraints and risks. Finally, they ensure that the relevant outcomes are achieved or that course corrections are made where needed.



Quote for the day:

"Leaders must see the dream in their mind before they will accomplish the dream with their team." -- Orrin Woodward

Daily Tech Digest - October 09, 2021

Looking ahead to the API economy

While established companies invest in new APIs to support digital transformation projects, early startups build on top of the latest technology stacks. This trend is turning the Internet into a growing fabric of interconnected technologies the likes of which we've never seen. As the number of new technologies peaks, the underlying fabric — otherwise known as the API economy — fuels the market to undergo technology consolidations with the historic-high number of acquisitions. There are two interesting consequences of this trend. The first is that all of this drives the need for better, faster, and easier-to-understand APIs. Many Integration-Platform-as-a-Service (iPaaS) vendors understand this quite well. Established iPaaS solutions, such as those from Microsoft, MuleSoft, and Oracle, are continually improved with new tools while new entrants, like Zapier and Workato, continue to emerge. All invest in simplifying the integration experience on top of APIs, essentially speeding the time-to-integration. Some call these experiences "connectors" while others call them "templates."


Is Artificial Intelligence Taking over DevOps?

As a consequence of the utility of AI tools, they have been widely and rapidly adopted by all but the most stubborn DevOps teams. Indeed, for teams now running several different clouds (and that’s all teams, pretty much) AI interfaces have become almost a necessity as they evolve and scale their DevOps program. The most obvious and tangible outcome of this shift has been in the data and systems that developers spend their time looking at. It used to be that a major part of the role of the operations team, for instance, was to build and maintain a dashboard that all staff members could consult, and which contained all of the apposite data on a piece of software. Today, that central task has become largely obsolete. As software has grown more complex, the idea of a single dashboard containing all the relevant information on a particular piece of software has begun to sound absurd. Instead, most DevOps teams now make use of AI tools that “automatically” monitor the software they are working on, and only present data when it is clear that something has gone wrong.
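The "only present data when something has gone wrong" behaviour can be approximated with even a simple statistical baseline; real AIOps tools use far richer models. A minimal sketch that flags samples deviating sharply from the recent window:

```python
import statistics

def anomalies(samples, window=10, threshold=3.0):
    """Flag indices whose value is more than `threshold` standard
    deviations away from the mean of the preceding `window` samples."""
    flagged = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mean = statistics.fmean(baseline)
        stdev = statistics.pstdev(baseline)
        if stdev and abs(samples[i] - mean) > threshold * stdev:
            flagged.append(i)
    return flagged

# Hypothetical latency metric: steady around 20 ms, one spike.
latency_ms = [20, 21, 19, 22, 20, 21, 20, 19, 22, 21, 250, 20]
print(anomalies(latency_ms))  # [10]
```

Nothing is surfaced while the metric stays inside its normal band; only the spike at index 10 would reach a dashboard or an alert, which is the inversion of the "always-on dashboard" model the excerpt describes.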


Five Functions That Benefit From Cybersecurity Automation

Defending against cybersecurity threats is very expensive, said Michael Rogers, operating partner at venture capital firm Team8 and former director of the U.S. National Security Agency. But the costs for attackers are low, he told Data Center Knowledge. "Prioritizing cybersecurity solutions that provide smart, cost-effective ways to reduce, mitigate or even prevent cyberattacks is key," he said. "Inevitably, as we move to an increasingly digital world, these options are game-changers in safeguarding our society and digital future.” Some areas where cybersecurity automation is making a particular difference include incident response, data management, attack simulation, API and certificate management, and application security. ... "A lot of machine learning is being thrown at huge data sets," he said. "The analytics are getting better. And what do you do with that analysis? You want to do threat detection and response, you want to bring the environment back to a safer operating state. Now, these new tools are able to do a lot of this automatically."


Minimizing Design Time Coupling in a Microservice Architecture

To deliver software rapidly, frequently, and reliably, you need what I call the success triangle. You need a combination of three things: process, organization, and architecture. The process, which is DevOps, embraces concepts like continuous delivery and deployment, and delivers a stream of small changes frequently to production. You must structure your organization as a network of autonomous, empowered, loosely coupled, long-lived product teams. You need an architecture that is loosely coupled and modular. Once again, loose coupling is playing a role. If you have a large team developing a large, complex application, you must typically use microservices. That's because the microservice architecture gives you the testability and deployability that you need in order to do DevOps, and it gives you the loose coupling that enables your teams to be loosely coupled. I've talked a lot about loose coupling, but what is that exactly? Operations that span services create coupling between them. Coupling between services is the degree of connectedness.


D-Wave took its own path in quantum computing. Now it’s joining the crowd.

Although D-Wave was the first company to build a working quantum computer, it has struggled to gain commercial traction. Some researchers, most notably computer scientist Scott Aaronson at the University of Texas at Austin, faulted the company for over-hyping what its machines were capable of. (For a long time, Aaronson cast doubt on whether D-Wave's annealer was harnessing any quantum effects at all in making its calculations, although he later conceded that the company's machine was a quantum device.) In the past few years, the company has also had trouble exciting investors: in March, it secured a $40 million grant from the Canadian government. But that came after The Globe & Mail newspaper reported that a financing round in 2020 had valued the company at just $170 million, less than half of its previous $450 million valuation. The company's decision to add gate-model quantum computers to its lineup may be an acknowledgment that commercial momentum seems to be far greater for those machines than for the annealers that D-Wave has specialized in.


What SREs Can Learn From Facebook’s Largest Outage

Facebook was clearly prepared to respond to this incident quickly and efficiently. If it wasn’t, it would no doubt have taken days to restore service following a failure of this magnitude rather than just hours. Nonetheless, Facebook has reported that troubleshooting and resolving the network connectivity issues between data centers proved challenging for three main reasons. First and most obviously, engineers struggled to connect to data centers remotely without a working network. That’s not surprising: as an SRE, you’re likely to run into an issue like this sooner or later. Ideally, you’ll have some kind of secondary remote-access solution, but that’s hard to implement within the context of infrastructure like this. The second challenge is more interesting. Because Facebook’s data centers “are designed with high levels of physical and system security in mind,” according to the company, it proved especially difficult for engineers to restore networking even after they went on-site at the data centers.


Becoming a new chief information security officer today: The steps for success

As a new CISO, you should evaluate existing policies including cyber insurance, representation from legal teams, connections with incident response (IR) -- and also who is handling the firm's PR. Insurance providers may list recommended or approved IR and legal responders, so CISOs need to make sure an organization's teams are either on the permissible list or added to it. What is included in cyber insurance policies should also be explored. For example, does it cover ransomware infections or data theft and extortion, and if so, what is the limit of potential claims? You should also find out if you are covered for liability should you become part of a lawsuit due to a cybersecurity incident -- and whether or not the same applies to your team. ... Questions should be asked at leadership meetings which will give new security officers a fighting chance to perform well in their roles. This includes what cybersecurity budget is available -- and whether it is separate from or part of the general IT budget -- and whether there has been an increase year-over-year.


How Do You Choose the Best Test Cases to Automate?

While automation frees up the tester’s time, organizations and individuals often overlook a crucial aspect of testing - the cost and time required to maintain the automated tests. If there are significant changes to the backend of your application, writing and rewriting the code for automated tests is often just as cumbersome as manual testing. One interesting way to tackle this is for test engineers to automate just enough to understand which part of the program is failing. You can do this by automating the broader application tests so that if something does break, you know exactly where to look. Smart test execution, one of the top trends in the test automation space, does exactly this by identifying the specific tests that need to be executed. ... How complex is the test suite you’re trying to automate? If the test results need to be rechecked with a human eye or require actual user interaction, automating it probably won’t help a lot. For example, user experience tests are best left unautomated because testing software can never mimic human emotion while using a product.


The Cyber Insurance Market in Flux

Early cyber insurance policies only required filling out surveys on existing protocols. Now, insurers are moving toward active verification. “We need to be able to have a little more substantive evidence that you've done what you're saying you’re going to do,” says Soo. “This dynamic is causing a much-needed maturation in how the insurance industry is thinking about cybersecurity risks,” McNerny argues. “They are now thinking a lot harder about the kinds of controls they’d like to see in place.” Multi-factor authentication is among the primary cyber hygiene practices emerging as an industry standard. Reduction of attack surface, protection of credentials, and network segmentation will likely become necessary to secure coverage as well. And not all these factors will be the responsibility of a given organization’s cyber security team. According to McNerny, implementation will require a cultural shift. All employees need to be educated on how to prevent these attacks. “We often think in terms of technology,” he says.


Researchers discover ransomware that encrypts virtual machines hosted on an ESXi hypervisor

The investigation revealed that the attack began at 12:30 a.m. on a Sunday, when the ransomware operators broke into a TeamViewer account running on a computer that belonged to a user who also had domain administrator access credentials. According to the investigators, 10 minutes later, the attackers used the Advanced IP Scanner tool to look for targets on the network. The investigators believe the ESXi Server on the network was vulnerable because it had an active Shell, a programming interface that IT teams use for commands and updates. This allowed the attackers to install a secure network communications tool called Bitvise on the machine belonging to the domain administrator, which gave them remote access to the ESXi system, including the virtual disk files used by the virtual machines. At around 3:40 a.m., the attackers deployed the ransomware and encrypted these virtual hard drives hosted on the ESXi server. “Administrators who operate ESXi or other hypervisors on their networks should follow security best practices. ...” said Brandt.



Quote for the day:

"It is our choices that show what we truly are, far more than our abilities." - J.K. Rowling

Daily Tech Digest - October 07, 2021

Encryption: Why security threats coast under the radar

This application of AI became a valuable source of IT expertise that multiplied staff bandwidth to manage the solution and allowed for full and complex monitoring of the entire networked environment. With Flowmon ADS in place, the institute has a comprehensive, yet noise-free overview of suspicious behaviours in the partner networks, flawless detection capability, and a platform for the validation of indicators of compromise. Flowmon’s solution works at scale too. GÉANT – which is a pan-European data network for the research and education community – is one of the world’s largest data networks, and transfers over 1,000 terabytes of data per day over the GÉANT IP backbone. For something of that scale there is simply no way to manually monitor the entire network for aberrant data. With a redundant application of two Flowmon collectors deployed in parallel, GÉANT was able to have a pilot security solution to manage data flow of this scale live in just a few hours. With a few months of further testing, integration and algorithmic learning, the solution was then ready to protect GÉANT’s entire network from encrypted data threats.


In The Digital Skills Pipeline, A Shift Away From Traditional Hiring Modes

“As digital transformation accelerates and we experience generational shifts, professionals will increasingly desire better work-life balance and freedom from legacy in-office models,” says Saum Mathur, chief product, technology and AI officer with Paro. “Consultancies and others that are reliant on legacy models are struggling to adapt to this new reality, and marketplaces are only furthering these models’ disruption. Three to five years ago, the gig economy pioneers offered customers finite, task-based services that didn’t require extensive experience and enabled flexible scheduling. With continued shifts in the technical and cultural landscape, the gig economy has been extended into professional services, which is powered by highly experienced subject matter experts of all levels.” Corporate culture needs to be receptive to the changes wrought by digital transformation. Forty-one percent of executives in the Alliantgroup survey have encountered employee resistance, while 32% say they have had “the wrong team or department overseeing initiatives.”


Remote-working jobs: Disaster looms as managers refuse to listen

The Future Forum Pulse survey echoed a sentiment that has been voiced repeatedly over the past 18 or so months: employees have embraced remote working, and see it as a pillar of their future working preferences. Yet executives are more likely than lower-level workers to be in favour of a working week based heavily around an office. Of those surveyed, 44% of executives said they wanted to work from the office every day, compared to just 17% of employees. Three-quarters (75%) of executives said they wanted to work from the office 3-5 days a week, versus 34% of employees. This disconnect between employer and employee preferences risks being entrenched into new workplace policies, researchers found. Two-thirds (66%) of executives reported they were designing post-pandemic workforce plans with little to no direct input from employees – and yet 94% said they were "moderately confident" that the policies they had created matched employee expectations. What's more, more than half (56%) of executives reported they had finalized their plans on how employees can work in the future. 


Will the cloud eat your AI?

"CSPs' cloud and digital services have given them access to the enormous amounts of data required to effectively train AI models," the authors concluded. Such economies of scale have been an asset to the cloud providers for years. Years ago, RedMonk analyst Stephen O'Grady highlighted the "relentless economies of scale" that the cloud providers brought to hardware -- they could simply build more cheaply than any enterprise could hope to replicate in their own data centers. Now the CSPs enjoy a similar advantage with data. But it's not merely a matter of raw data. The CSPs also have more experience using that data on a large scale. The CSPs have products (e.g., Amazon Alexa to assist with natural language processing, or Google Search to help with recommendation systems). Lots of data feeding ever-smarter applications feeding more data into the applications... it's a self-reinforcing cycle. Oh, and that hardware mentioned earlier? The CSPs also have more experience tuning hardware to process machine learning workloads at scale. 


Operationalizing machine learning in processes

Operationalizing ML is data-centric—the main challenge isn’t identifying a sequence of steps to automate but finding quality data that the underlying algorithms can analyze and learn from. This can often be a question of data management and quality—for example, when companies have multiple legacy systems and data are not rigorously cleaned and maintained across the organization. However, even if a company has high-quality data, it may not be able to use the data to train the ML model, particularly during the early stages of model design. Typically, deployments span three distinct, and sequential, environments: the developer environment, where systems are built and can be easily modified; a test environment (also known as user-acceptance testing, or UAT), where users can test system functionalities but the system can’t be modified; and, finally, the production environment, where the system is live and available at scale to end users.


MLOps essentials: four pillars for Machine Learning Operations on AWS

Managing code in Machine Learning applications is a complex matter. Let’s see why! Collaboration on model experiments among data scientists is not as easy as sharing traditional code files: Jupyter Notebooks allow for writing and executing code, resulting in more difficult git chores to keep code synchronized between users, with frequent merge conflicts. Developers must code on different sub-projects: ETL jobs, model logic, training and validation, inference logic, and Infrastructure-as-Code templates. All of these separate projects must be centrally managed and adequately versioned! For modern software applications, there are many consolidated Version Control procedures like conventional commits, feature branching, squash and rebase, and continuous integration. These techniques, however, are not always applicable to Jupyter Notebooks since, as stated before, they are not simple text files. Data scientists need to try many combinations of datasets, features, modeling techniques, algorithms, and parameter configurations to find the solution that best extracts business value.


Why Unsupervised Machine Learning is the Future of Cybersecurity

There are two types of Unsupervised Learning: discriminative models and generative models. Discriminative models can only tell you that if you give them X, then the consequence is Y, whereas a generative model can tell you the total probability that you’re going to see X and Y at the same time. So the difference is as follows: the discriminative model assigns labels to inputs and has no predictive capability. If you give it a different X that it has never seen before, it can’t tell what the Y is going to be because it simply hasn’t learned that. With generative models, once you set one up and find the baseline, you can give it any input and ask it for an answer. Thus, it has predictive ability -- for example, it can generate a possible network behavior that has never been seen before. So let’s say some person sends a 30-megabyte file at noon; what is the probability that he would do that? If you asked a discriminative model whether this is normal, it would check to see if the person had ever sent such a file at noon before... but only specifically at noon.
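The distinction above can be illustrated with a toy example (the events and probabilities here are invented for illustration and are not from any real security product): a discriminative estimate of P(Y|X) has no answer for an X it never saw, while a smoothed joint model of P(X, Y) can still score the unseen combination.

```python
from collections import Counter

# Toy observations of (time bucket, upload size bucket) events.
observations = [("noon", "small"), ("noon", "small"), ("evening", "large"),
                ("noon", "large"), ("evening", "small"), ("noon", "small")]

# Discriminative view: estimate P(Y | X) -- only answers for X values
# that appeared in the training data.
def p_y_given_x(x, y):
    matching_x = [o for o in observations if o[0] == x]
    if not matching_x:
        return None  # never seen this X: the model is stuck
    return sum(1 for o in matching_x if o[1] == y) / len(matching_x)

# Generative view: model the joint P(X, Y) with add-one (Laplace)
# smoothing, so it can still score combinations never observed.
def p_joint(x, y, xs=("noon", "evening", "night"), ys=("small", "large")):
    counts = Counter(observations)
    total = len(observations) + len(xs) * len(ys)  # smoothed denominator
    return (counts[(x, y)] + 1) / total

print(p_y_given_x("night", "large"))        # → None (no answer)
print(round(p_joint("night", "large"), 3))  # → 0.083 (still a probability)
```

Real systems model far richer behavior baselines, but the asymmetry is the same: only the joint model can assign a probability to a never-before-seen event.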


Sorry, Blockchains Aren’t Going to Fix the Internet’s Privacy Problem

Recently, a number of blockchain-based companies have sprung up with the vision of helping people take control of their data. They get an enthusiastic reception at conferences and from venture capitalists. As someone who cares deeply about my privacy, I wish I thought they stood a better chance of success, but they face many obstacles on the road ahead. Perhaps the biggest obstacle I see for personal-data monetization businesses is that your personal information just isn’t worth that much on its own. Data aggregation businesses run on a principle that’s sometimes referred to as the “river of pennies.” Each individual user or asset has nearly zero value, but multiply the number of users by millions and suddenly you have something that looks valuable. That doesn’t work in the reverse, however. Companies are far more focused and disciplined in the pursuit of millions of dollars in ad or data revenue than one consumer trying to make $25 a year. But why isn’t your data worth that much? Very simply, the world is awash in your information, and you’re not the only source of that information. The truth is that you leak information constantly in a digital ecosystem.


Iranian APT targets aerospace and telecom firms with stealthy ShellClient Trojan

The Trojan is created with an open-source tool called Costura that enables the creation of self-contained compressed executables with no external dependencies. This might also contribute to the program's stealthiness and to why it hasn't been discovered and documented until now after three years of operation. Another possible reason is that the group only used it against a small and carefully selected pool of targets, even if across geographies. ShellClient has three deployment modes controlled by execution arguments. One installs it as a system service called nhdService (Network Hosts Detection Service) using the InstallUtil.exe Windows tool. Another execution argument uses the Service Control Manager (SCM) to create a reverse shell that communicates with a configured Dropbox account. A third execution argument only executes the malware as a regular process. This seems to be reserved for cases where attackers only want to gather information about the system first, including which antivirus programs are installed, and establish if it's worth deploying the malware in persistence mode.


How financial services can invest in the future with predictive analytics

Predictive analytics empowers users to make better decisions that consider what has happened and what is likely to happen based on the available data. And those decisions can only be made if employees understand what they’re working with. They need good data literacy competencies to understand, challenge, and take actions based on the insights, with greater abilities to realise the limitations and question the output of predictive analytics. After all, a forecast’s accuracy depends on the data fuelling it, so its performance could be impacted during an abnormal event or by intrinsic bias in the dataset. Employees must have confidence in their understanding of the data to question its output. This is particularly true when decisions could directly impact customers’ lives, particularly the influential impact of those made in the financial sector – from agreeing to an overdraft and making it to payday to approving a mortgage application in time. 



Quote for the day:

"All leadership takes place through the communication of ideas to the minds of others." -- Charles Cooley

Daily Tech Digest - October 06, 2021

Deep Learning's Diminishing Returns

While deep learning's rise may have been meteoric, its future may be bumpy. Like Rosenblatt before them, today's deep-learning researchers are nearing the frontier of what their tools can achieve. To understand why this will reshape machine learning, you must first understand why deep learning has been so successful and what it costs to keep it that way. ... Deep-learning models are overparameterized, which is to say they have more parameters than there are data points available for training. Classically, this would lead to overfitting, where the model not only learns general trends but also the random vagaries of the data it was trained on. Deep learning avoids this trap by initializing the parameters randomly and then iteratively adjusting sets of them to better fit the data using a method called stochastic gradient descent. Surprisingly, this procedure has been proven to ensure that the learned model generalizes well. The success of flexible deep-learning models can be seen in machine translation. For decades, software has been used to translate text from one language to another. Early approaches to this problem used rules designed by grammar experts.
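The stochastic gradient descent procedure described above can be sketched in a few lines. This toy example (illustrative values, noise-free data, a single parameter) fits one weight by stepping on one randomly chosen sample at a time:

```python
import random

# A minimal sketch of stochastic gradient descent fitting y = w * x on
# toy, noise-free data. Each step adjusts the weight using the gradient
# from a single randomly chosen point.
random.seed(0)
data = [(x, 2.0 * x) for x in range(1, 11)]  # true weight is 2.0

w = 0.0     # (random) initialization
lr = 0.005  # learning rate
for _ in range(1000):
    x, y = random.choice(data)   # one sample per step: the "stochastic" part
    grad = 2 * (w * x - y) * x   # d/dw of the squared error (w*x - y)**2
    w -= lr * grad               # step against the gradient

print(round(w, 3))  # converges to (approximately) the true weight 2.0
```

Deep networks apply the same update rule to millions or billions of parameters at once, over mini-batches rather than single points, which is where the computational cost discussed in the article comes from.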


IT security and cybersecurity: What's the difference?

Information technology focuses on the systems that store and transmit digital information. Cybersecurity, in contrast, focuses on protecting electronic information stored within those systems. Cybersecurity usually focuses on digital information and infrastructure. Infrastructure may include internet connections and local area networks that store and share information. In short, cybersecurity focuses on preventing hackers from gaining digital access to important data on networks, on computers, or within programs. Workers in IT and cybersecurity have varying job titles depending on their education, training, experience, and responsibilities. One subset of IT, IT security, focuses on protecting access to computers, networks, and information. IT security professionals may create plans to protect digital assets and monitor computer systems and networks for threats. They may also work to protect the physical equipment storing the data, along with the data itself. Another subset of IT, information security, focuses on securing data and systems against unauthorized access. 


How to quit your job and start your business in 90 days

Quitting is a straightforward decision that requires courage, boldness, and a strong belief in what you are about to do. But on the other hand, having a job you don't like can be the worst death sentence for your happiness and personal fulfillment. Quitting your job should be done wisely and in a balanced way, and building a business that replaces the security of income from your previous job is an art. ... Stopping working for someone else doesn't automatically make you able to work for yourself, but it does qualify you to try. Starting a business is like planning an expedition to Mount Everest. Climbing the highest peak in the world requires money, training, a year of planning, and only 49% of those who attempt it make it to the top. A dream without a deadline is a wish. Sitting for months contemplating your idea is one of the worst passive tactics to avoid commitment. Set a date to quit your job and dedicate yourself full time to your business. Just as it is important to set a start date, it is just as important to designate an end date. A date on which, with maturity and wisdom, you can say "this is not working."


How one coding error turned AirTags into perfect malware distributors

“Security consultant and penetration tester Bobby Rauch discovered that Apple's AirTags — tiny devices which can be affixed to frequently lost items like laptops, phones, or car keys — don't sanitize user input. This oversight opens the door for AirTags to be used in a drop attack. Instead of seeding a target's parking lot with USB drives loaded with malware, an attacker can drop a maliciously prepared AirTag,” the publication reported. “This kind of attack doesn't need much technological know-how — the attacker simply types valid XSS into the AirTag's phone number field, then puts the AirTag in Lost mode and drops it somewhere the target is likely to find it. In theory, scanning a lost AirTag is a safe action — it's only supposed to pop up a webpage at https://found.apple.com/. The problem is that found.apple.com then embeds the contents of the phone number field in the website as displayed on the victim's browser, unsanitized.” The worst part about this hole is that the damage it can inflict is only limited by the attacker’s creativity. 
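The missing sanitization step is simple to sketch. Below is a minimal, hypothetical illustration in Python (not Apple's actual stack or templates): HTML-escaping the user-supplied field before embedding it renders a script payload inert.

```python
import html

# A sketch of input sanitization for a page that embeds a user-supplied
# phone number field. The template here is hypothetical; the point is
# that html.escape turns an XSS payload into harmless text.
def render_lost_page(phone_field):
    return "<p>Contact owner: {}</p>".format(html.escape(phone_field))

payload = '<script>window.location="https://evil.example/login"</script>'
print(render_lost_page(payload))
# The <script> tag comes out as inert &lt;script&gt;... text, so the
# victim's browser displays it instead of executing it.
```

Omitting that one escaping call is exactly the class of oversight that turned the Lost mode page into a drop-attack vector.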


Why today’s cybersecurity threats are more dangerous

Unlike 20 years ago, when even extensive IT systems were comparatively standalone and straightforward, the interdependencies of systems now make dealing with and defending against threats a much more difficult proposition. "The core problem here is complexity and our interdependence," Snyder said. "That is something that we're not going to move away from because that is providing us flexibility and functionality and all these other critical functions that we need. We've got a growing problem here." One new variable thrown into the digital mix is the meteoric growth of ransomware, which makes it appear that cyberattacks are getting worse. "I think that the ransomware attackers have found a perfectly successful illegitimate business model," Rand Corporation researcher Jonathan Welburn said. "Every time there's a large-scale attack, we see that [victims] issue a payment, and it solves the problem. It's a really good advertisement for that business model." Jay Healey, a senior research scholar at Columbia University, said that at one level, cybersecurity risks are unchanged from what they were two decades ago. "We've been here before," he said. 


The insecure application conundrum: how to stop the influx of vulnerable applications

The fundamental root cause of application insecurities can be attributed to the fact that security awareness training for developers is virtually non-existent. Developers do not willingly deploy applications in the hope that exploits are never found. Instead, there still exists a lack of exposure and experience that plays a part in them not understanding the actual severity of some of the vulnerabilities. At the same time, there is a global shortage of experienced developers, as evidenced by the fact that vacancies for application development security developers are set to grow 164% in the next five years. Finding an experienced developer with a rounded skillset is like finding a needle in a haystack. As a result, for businesses, there is more economic value in investing in the training of developers in cyber security to build their competence at secure development methods, linked to their business. In essence, there are two major ways to distinguish how vulnerabilities are caused – through technical vulnerabilities and business logic flaws.


Facebook outage was a series of unfortunate events

Facebook says the root cause of its outage Monday involved a routine maintenance job gone awry that resulted in rendering its DNS servers unavailable, but first the entire Facebook backbone network had crashed. To make matters worse, the loss of DNS made it impossible for Facebook engineers to remotely access the devices they needed to in order to bring the network back up, so they had to go into the data centers to manually restart systems. That slowed things down, but they were slowed down even more because the data centers have safeguards in place to make tampering hard—for anybody. “They’re hard to get into, and once you’re inside, the hardware and routers are designed to be difficult to modify even when you have physical access to them,” according to a Facebook blog written by Santosh Janardhan, the company's vice president of engineering and infrastructure. It took time, but once the systems were restored, the network came back up. Restoring the customer-facing services that run over the network was another lengthy process because turning them up all at once could cause another round of crashes. 


The Three Symptoms of Toxic Leadership and How to Get Out of It

Toxicity has eaten deep into the very fabric of what is standard in the workplace. Why is it okay for people to use swear words and hate on one another, but not okay to use words such as love and appreciation? Why has what is supposed to be the norm now considered or seen as being “out there”? That's not right, and a change in this thought pattern is long overdue. Now is the time to educate everyone on the importance of speaking right, doing right, treating each other right in the workplace, and above all, being a nontoxic leader. It’s time we stop being toxic leaders and take action. Once I started studying and analyzing my own toxic traits, I was able to come out of it. And now, I help other successful leaders in tech do the same. For example, I was once working with an engineering manager at a start-up company. She worked around the clock to provide everything for her team. She did sufficient training, was nice to everyone, and provided all the support she possibly could.


Hybrid work: 9 ways to encourage healthy team conflict

Diversity of thought leads to better solutions in the end. “Leaders of high-performing teams consistently convey the importance of conflict and push the team to engage in constructive debate, even to the point that the tension makes team members uncomfortable, to generate the best decisions,” says Andy Atkins, practice leader at BTS Boston. This can be trickier in the hybrid world. “It is more difficult to gauge team members’ reactions, or test the temperature in the room, and it is easier for team members themselves to withdraw from the conversation,” says Atkins. Therefore, leaders must be more deliberate in creating a culture that encourages speaking up. The most successful leaders not only model the willingness to face conflict themselves, but also help team members express their own points of view. “It helps if the team leader takes care to reserve his or her own observations in discussions to allow others to speak first, and to deliberately draw out different opinions around the table before moving on,” says Atkins.


Critical infrastructure IoT security: Going back to basics

Ultimately, IoT devices weren’t built with security in mind. The vast majority of IoT devices are poorly secured, often running out-of-date software or default security configurations, which makes them vulnerable targets for threat actors. The fact is that until the last 5 or 10 years, security wasn’t even something considered as a part of developing OT. It’s not like a hospital buys a new MRI machine every year, so that 10-year-old MRI machine in the hospital is still highly vulnerable since it was built in a time when security wasn’t important or thought of. It is unsurprising that the vulnerability of IoT and the critical infrastructure landscape as a whole to cyberattacks is becoming a growing concern within the security landscape, and recent attacks on the sector have proven the need to ramp up security efforts. Even though IoT is becoming an increasing target, the focus of many recent attacks is on OT infrastructure. For that reason, the critical infrastructure industry must take a security-first stance to securing their operations. 



Quote for the day:

"Leaders keep their eyes on the horizon, not just on the bottom line." -- Warren G. Bennis

Daily Tech Digest - October 05, 2021

How cloud-native apps and microservices impact the development process

One of the more important coding disciplines in object-oriented programming and SOA is code refactoring. The techniques allow developers to restructure code as they better understand usage considerations, performance factors, or technical debt issues. Refactoring is a key technique for transforming monolithic applications into microservices. Refactoring strategies include separating the presentation layer, extracting business services, and refactoring databases. Robin Yeman, strategic advisory board member at Project and Team, has spent most of her career working on large-scale government and defense systems. Robin concedes, “The largest technology barriers to utilizing agile in building or updating complex legacy systems are the many dependencies in the software architecture, forcing multiple handoffs between teams and delays in delivery.” Robin suggests that refactoring should focus on reducing dependencies. She recommends, “Refactoring the software architecture of large legacy systems to utilize cloud-native applications and microservices reduces dependencies between the systems and the teams supporting them.”


Web3 Architecture and How It Compares to Traditional Web Apps

According to Kasireddy, backend programming for a dapp is entirely different than for a traditional web application. In Web3, she writes, “you can write smart contracts that define the logic of your applications and deploy them onto the decentralized state machine [i.e. the Ethereum blockchain].” Web servers and traditional databases, in this paradigm, are no longer needed — since everything is done on, or around, the blockchain. She notes a bit later in the post that “Smart contracts are written in high-level languages, such as Solidity or Vyper.” Solidity was partly inspired by ECMAScript syntax, so it has some similarities to JavaScript (but is very different in other ways). As for the frontend, that “pretty much stays the same, with some exceptions,” writes Kasireddy. ... There are also complications when it comes to “signing” transactions, which is the cryptographic process that keeps blockchains secure. You need a tool like MetaMask to handle this.


UEFI threats moving to the ESP: Introducing ESPecter bootkit

Even though Secure Boot stands in the way of executing untrusted UEFI binaries from the ESP, over the last few years we have been witness to various UEFI firmware vulnerabilities affecting thousands of devices that allow disabling or bypassing Secure Boot. This shows that securing UEFI firmware is a challenging task and that the way various vendors apply security policies and use UEFI services is not always ideal. Previously, we have reported multiple malicious EFI samples in the form of simple, single-purpose UEFI applications without extensive functionality. These observations, along with the concurrent discovery of the ESPecter and FinFisher bootkits, both fully functional UEFI bootkits, show that threat actors are not relying only on UEFI firmware implants when it comes to pre-OS persistence, but also are trying to take advantage of disabled Secure Boot to execute their own ESP implants. We were not able to attribute ESPecter to any known threat actor, but the Chinese debug messages in the associated user-mode client component leads us to believe with a low confidence that an unknown Chinese-speaking threat actor is behind ESPecter.


Business Leadership Changed: The New Skills You Must Master

Strategic plans are important to achieving your vision, but they can't be set in stone either. The pandemic was an unforeseen situation that took all companies in the world by surprise. Consequently, it is important to be ready to pivot, change course quickly, and minimize the impact on the rest of the organization. ... People are inherently social creatures. It should come as no surprise then that we long to feel connected to the people we spend most of our time with. So how can we, as business leaders, help these connections occur between employees? Gregg Lederman is a bestselling author focused on employee interaction. After a long investigation he discovered 3 things that people need at work to feel completely fulfilled: The need for recognition: People have a need to be recognized for the skill and perspective they bring and for the challenges they have accomplished; The need for respect: People want to be respected for who they are as individuals and professionals and how they contribute to the team; The need for relationships: People want satisfying relationships with the people they work with.


Encrypted & Fileless Malware Sees Big Growth

“This malware family uses PowerShell tools to exploit various vulnerabilities in Windows,” according to the firm. “But what makes it especially interesting is its evasive technique. WatchGuard found that AMSI.Disable.A wields code capable of disabling the Antimalware Scan Interface (AMSI) in PowerShell, allowing it to bypass script security checks with its malware payload undetected.” ... In just the first six months of 2021, malware detections originating from scripting engines like PowerShell had already reached 80 percent of last year’s total script-initiated attack volume. At its current rate, 2021 fileless malware detections are on track to double in volume year over year. “Malicious PowerShell scripts have been known to hide in the memory of the computer and already use legitimate tools, binaries and libraries that come installed on most Windows systems,” explained the report. “That is why attackers have increased their use of this technique, called living off the land (LotL) attacks. Using these methods, a vaporworm might make its script invisible to many antivirus systems that don’t inspect the scripts or systems’ memory.”


What if Chrome broke features of the web and Google forgot to tell anyone?

Earlier this year Chrome developers decided that the browser should no longer support JavaScript dialogs and alert windows when they're called by third-party iframes. That means that if something is embedded from another website, let's say a YouTube video, Chrome wants to stop allowing that embedded content to call the JavaScript alert function, which opens a small alert window. Eventually Chrome aims to get rid of alert windows altogether. So what happens when Chrome does this? At first nothing because it's an obscure entry in a bug tracker – CC'd to the Web Hypertext Application Technology Working Group (WHATWG) – that Chromium and other browser engineers read. ... You know what isn't happening here? No substantial public discussion happens, certainly not with builders of websites. Google puts its idea forward as bug reports, some folks at Apple working on WebKit and at Mozilla working on Firefox are invited to agree with it in a WHATWG GitHub thread and Bugzilla discussion, and they do. Google gets what it wants and the web breaks.


The Shortfalls of Mean Time Metrics in Cybersecurity

As a measurement standard, mean times are a legacy paradigm brought over from call centers many eons ago. Over the years, cybersecurity leaders adopted similar metrics because IT departments were familiar with them. In today's reality, mean times don't map directly to the type of work we do in cybersecurity, and we can't entirely generalize them to be meaningful indicators across the attack lifecycle. While these averages might convey speed relative to specific parts of the attack lifecycle, they don't provide any actionable information other than potentially telling you to hurry up. In the best-case scenario, MTTX becomes a vanity metric that looks great on an executive dashboard but provides little actual business intelligence. ... The fastest MTTX is not worth anything if it measures the creation of an inaccurate alert. We want mean time metrics to tell us about actual alerts, or true positives and not be skewed by bad data. So, you might be thinking, "how does an untuned MTTX tell you about the quality of work your security provider does, or how safe it makes your systems?" 
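The skew the passage describes is easy to demonstrate: compute the same mean-time metric over all alerts versus over true positives only. The sketch below uses made-up timestamps; fast-to-close false positives drag the average down and make the number look better than the real work was.

```python
from datetime import datetime
from statistics import mean

# Illustrative alert records (timestamps are invented): each has detection
# and resolution times plus a flag for whether it was a true positive.
alerts = [
    {"detected": datetime(2021, 10, 1, 9, 0),  "resolved": datetime(2021, 10, 1, 9, 5),  "true_positive": False},
    {"detected": datetime(2021, 10, 1, 10, 0), "resolved": datetime(2021, 10, 1, 10, 2), "true_positive": False},
    {"detected": datetime(2021, 10, 1, 11, 0), "resolved": datetime(2021, 10, 1, 15, 0), "true_positive": True},
]

def mttr_minutes(records) -> float:
    """Mean time to resolve, in minutes, over the given alert records."""
    return mean((r["resolved"] - r["detected"]).total_seconds() / 60 for r in records)

print(f"MTTR, all alerts:          {mttr_minutes(alerts):.1f} min")
print(f"MTTR, true positives only: {mttr_minutes(a for a in alerts if a['true_positive']):.1f} min")
```

Here the two quickly dismissed false positives pull the headline MTTR to roughly a third of the true-positive figure, which is exactly the "vanity metric" effect described above.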


How Non-Fungible Tokens Work: NFTs Explained, Debunked, and Legitimized

In a real marketplace, even if the property is intellectual property (such as a patent or copyright, whose form can be entirely digital), there will likewise need to be a contractual transfer of the rights to that intellectual property to a new party, with the transfer again having the full endorsement and power of law behind it. For instance, if in making an intellectual property purchase, I acquire the copyright to a picture, even a digital picture, the real market that operates in our society ensures that the transfer is subject to its laws and strictures. Through my purchase, I will own the picture in a real sense and can take legal action against anyone who tries to infringe on my copyright (such as by posting it on a blog without my permission). By contrast, the concept of owning an NFT on a blockchain is specific to the blockchain with no legal force in the society at large. Suppose I snap a digital photo. Because I’m the one who snapped the photo, US law agrees that I own the copyright to it. 


WebAssembly: The Future of Cloud Native Distributed Computing

In its own right, WebAssembly brings new capabilities and additional security features to modern development environments — both in the browser and with cloud native. However, modern cloud native developers are confronted with new challenges, such as CPU diversity, multiple operating environments, security, distributed application architecture, and scalability, that transcend deployments into a single public cloud provider. To understand the modern distributed computing environment, one must consider the rising diversity inside the public cloud, where we see new ARM CPUs challenging the historical dominance of the x86 chipsets, competing on both cost and performance. Traditional enterprise systems typically compile software for a specific target environment, including a CPU and an operating system, such as Linux 32-bit, macOS ARM64, or Windows 64-bit. Looking past the public cloud towards the edge, we find an even more diverse range of execution environments on an assorted set of CPU architectures.


Post-Quantum: Bi-Symmetric Hybrid Encryption System

A significant difference from commonly employed asymmetric encryption is that during the initial handshake to set up communication, no vulnerable data are exchanged. Should the sender's key communication be intercepted by a hacker, they still cannot pose as the originator of the communication to the receiver. The encryption itself is achieved by randomly generating keys and interweaving them with portions of the unencrypted data to be transmitted, applied to single bytes of data rather than long byte collections. During the initial handshake, private keys are generated from, or found in the form of, login credentials, credit card information, biometric data, or other personal credential information or pre-shared private keys. The private keys are used to start the handshake and are never actually transmitted. Randomly generated data in the form of challenge codes, counter-challenge codes, and session keys are exchanged during the handshake. This allows the client and server to ascertain that the communicator at the other end is who they say they are.
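The proprietary construction itself isn't spelled out here, but the handshake flow described (pre-shared private keys that are never transmitted, plus random challenge and counter-challenge codes) resembles a generic HMAC-based mutual challenge-response. The sketch below illustrates only that generic pattern, under that assumption, and is not the Bi-Symmetric scheme.

```python
import hashlib
import hmac
import os

# Generic mutual challenge-response sketch (NOT the Bi-Symmetric scheme):
# the pre-shared key never crosses the wire; only random challenges and
# their keyed digests are exchanged.
PRE_SHARED_KEY = b"credential-derived-secret"  # assumed shared in advance

def respond(challenge: bytes, key: bytes = PRE_SHARED_KEY) -> bytes:
    """Prove knowledge of the key without revealing it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Server authenticates the client...
server_challenge = os.urandom(32)
client_proof = respond(server_challenge)            # sent to the server
assert hmac.compare_digest(client_proof, respond(server_challenge))

# ...then the client authenticates the server with a counter-challenge.
client_counter_challenge = os.urandom(32)
server_proof = respond(client_counter_challenge)    # sent to the client
assert hmac.compare_digest(server_proof, respond(client_counter_challenge))
print("mutual challenge-response succeeded")
```

An eavesdropper who captures the challenges and proofs still cannot impersonate either side, because producing a valid proof for a fresh random challenge requires the key itself, matching the interception-resistance claim in the passage.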


Quote for the day:

"Leaders who won't own failures become failures." -- Orrin Woodward