Daily Tech Digest - September 21, 2022

IT Talent Crunch Shifts Tech Investment Strategies

Prasad Ramakrishnan, CIO at Freshworks, points out that low- and no-code tools enable businesses to do more with less, and the easy-to-use, configuration-based user experience of these tools means anyone can use them. He adds that tech stacks have become bloated and complex, with features end users typically don't care about. “In an attempt to check every box, technology went from being purpose-built to tailored to no one,” he says. “The pandemic has made this trend more pronounced.” Ramakrishnan regularly conducts an “app rationalization” exercise with his team, evaluating software applications in terms of the integrations they need, their security, whether they are being used (to retire them if not) and how much they are being used (to reduce licenses if warranted). “Constantly audit your tech stack,” he advises. “We also involve the end user to make sure everyone is part of the process, akin to a democratized process.” From his perspective, leaders need to create space for end-user feedback -- without it, companies could end up taking away valuable tools that employees rely on while leaving them with bloated applications they never use.


Why Investors & Founders Need To Embed Corporate Governance

There have been numerous tweets and posts about governance, the blame game, and other topics. Governance, in my opinion, begins with the founders and senior management. The investors and board have no way of knowing about fraud or any of the aforementioned issues because they are not involved in day-to-day operations. Once such issues are discovered, however, the board of directors and investors are responsible for resolution. Consider the case of a company in the news: many prominent Silicon Valley and New York-based investors participated despite the fact that one of the cofounders had been convicted of identity theft. If they believe in second chances, why not make this cofounder a full-fledged director of the company? There is also the role of regulatory bodies such as the RBI, given that some of these startups (particularly in fintech) are governed by them because they hold a stake in a bank. We need laws and regulations that encourage collaboration and ensure there is no “conflict”; as things stand, for example, our regulations make it impossible for investors to liquidate and take their money back.


Introduction to SOLID Principles of Software Architecture

Per the Single Responsibility Principle, every class should have no more than one responsibility (i.e., it should have one and only one purpose). If a class has multiple responsibilities, its functionality should be split into multiple classes, each handling a specific responsibility. ... When classes are open for extension but closed for modification, developers can extend the functionality of a class without having to modify the existing code in that class. In other words, programmers should make sure their code can handle new requirements without compromising the existing functionality. Bertrand Meyer is credited with introducing this principle in his book “Object-Oriented Software Construction.” According to Meyer, “a software entity should be open for extension but closed for modification.” The idea behind this principle is that it allows developers to extend software functionality while preserving the existing functionality. In practical terms, this means that new functionality should be added by extending an existing class rather than by modifying that class's code.
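
Both principles can be sketched in a few lines of Python. In this illustrative example (the class and rule names are invented for the demonstration), `InvoiceCalculator` has exactly one responsibility, and new discount behaviour arrives via new `DiscountRule` subclasses rather than edits to existing code:

```python
from abc import ABC, abstractmethod

# Single Responsibility: this class only computes totals;
# persistence, formatting, etc. would live elsewhere.
class InvoiceCalculator:
    def total(self, line_items):
        return sum(qty * price for qty, price in line_items)

# Open/Closed: new discount rules are added by extension,
# never by editing existing, working code.
class DiscountRule(ABC):
    @abstractmethod
    def apply(self, amount: float) -> float: ...

class NoDiscount(DiscountRule):
    def apply(self, amount):
        return amount

class PercentageDiscount(DiscountRule):
    def __init__(self, percent):
        self.percent = percent

    def apply(self, amount):
        return amount * (1 - self.percent / 100)

def checkout(line_items, rule: DiscountRule) -> float:
    return rule.apply(InvoiceCalculator().total(line_items))
```

Adding, say, a seasonal discount now means writing one new subclass; `checkout` and `InvoiceCalculator` never change, which is exactly the open-for-extension, closed-for-modification property Meyer describes.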


The Uber Hack’s Devastation Is Just Starting to Reveal Itself

“It’s disheartening, and Uber is definitely not the only company that this approach would work against,” says offensive security engineer Cedric Owens of the phishing and social engineering tactics the hacker claimed to use to breach the company. “The techniques mentioned in this hack so far are pretty similar to what a lot of red teamers, myself included, have used in the past. So, unfortunately, these types of breaches no longer surprise me.” The attacker, who could not be reached by WIRED for comment, claims that they first gained access to company systems by targeting an individual employee and repeatedly sending them multifactor authentication login notifications. After more than an hour, the attacker claims, they contacted the same target on WhatsApp pretending to be an Uber IT person and saying that the MFA notifications would stop once the target approved the login. Such attacks, sometimes known as “MFA fatigue” or “exhaustion” attacks, take advantage of authentication systems in which account owners simply have to approve a login through a push notification on their device rather than through other means, such as providing a randomly generated code. 
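
One common mitigation is to throttle push prompts server-side, so an attacker cannot bombard a target with approval requests in the first place. The sketch below is a minimal illustration only; the class name, thresholds, and in-memory store are assumptions, not any vendor's API:

```python
import time
from collections import defaultdict, deque

class PushPromptThrottle:
    """Refuse further push prompts for a user once too many have been
    sent in a short window -- a simple brake on 'MFA fatigue' attacks.
    Thresholds here are illustrative."""

    def __init__(self, max_prompts=3, window_seconds=300, clock=time.monotonic):
        self.max_prompts = max_prompts
        self.window = window_seconds
        self.clock = clock
        self._sent = defaultdict(deque)  # user_id -> timestamps of prompts

    def allow_prompt(self, user_id: str) -> bool:
        now = self.clock()
        q = self._sent[user_id]
        # Drop prompts that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_prompts:
            # Stop prompting; require a stronger factor (e.g. a code) instead.
            return False
        q.append(now)
        return True
```

Once the limit trips, the system would fall back to number matching or a one-time code rather than sending yet another approvable push.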


Does your password policy align with NIST recommendations?

“NIST outlines several simple steps to strengthen passwords against modern password-based attacks. Organizations that ignore NIST’s recommendations are leaving an essential authentication security layer vulnerable,” notes Josh Horwitz, chief operating officer at Enzoic. ... As hacking threats increase and many IT teams are understaffed, upgrading your password policy may seem like a nice-to-have. However, password hardening is easy to do, leverages the existing investment in passwords and, unlike most security policies, actually makes things easier for users and administrators. The right solution reduces user frustration around frequent required resets and complex rules. Technology can also lower administrative burden and spend by using automation to reduce password reset calls and boost cybersecurity. Adopting modern technology such as Enzoic for Active Directory can help you avoid security breaches, prevent ransomware attacks and avoid account takeovers. “Organizations need a way to identify when passwords become compromised,” says Horwitz, adding, “Otherwise, their users and administrators can’t follow or enforce the NIST requirement to not reuse compromised passwords.”
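
Two of the NIST SP 800-63B recommendations -- an eight-character minimum for user-chosen passwords and screening against known-compromised passwords -- reduce to a few lines of code. In this sketch the tiny local set stands in for a real compromised-credential service; the names and structure are illustrative:

```python
import hashlib

# Illustrative stand-in for a breach corpus; a real deployment would
# query a continuously updated compromised-credential service instead.
BREACHED_SHA1 = {
    hashlib.sha1(p.encode()).hexdigest()
    for p in ("password", "123456", "qwerty")
}

def check_password(candidate: str):
    """Return NIST SP 800-63B style objections to a candidate password."""
    problems = []
    if len(candidate) < 8:  # 800-63B minimum for user-chosen secrets
        problems.append("shorter than 8 characters")
    if hashlib.sha1(candidate.encode()).hexdigest() in BREACHED_SHA1:
        problems.append("appears in a known breach corpus")
    return problems
```

Note what is absent: no forced composition rules and no scheduled resets, both of which NIST advises against for user-chosen passwords.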


Cybersecurity as an employee benefit

Many business leaders and human resources professionals believe that cybersecurity is the responsibility of their information technology staff and managed services provider. However, ensuring that employees and their families have appropriate cybersecurity protection is an employee benefit that benefits employers as well. Employee mistakes, lack of awareness and general vulnerability remain the most significant cybersecurity risk for most employers. Simply training employees about cyber threats typically fails to reduce that risk sufficiently. To have a truly cyber-mature workforce, employers need to engage employees in cybersecurity. Teaching employees about the threats to themselves and their families, and making personal protection services available to them, is a much better way to do so. Cybersecurity training is not most people’s idea of a good time. However, employees sit up and take notice when trainers talk to them about the prevalence and severity of the cyber threats to them personally, including their identities, credit files, financial accounts, personal devices and home networks.


Meta, TikTok, YouTube and Twitter dodge questions on social media and national security

Whistleblowers and industry have repeatedly raised alarms about inadequate content moderation in other languages, an issue that gets inadequate attention due to a bias toward English language concerns, both at the companies themselves and at U.S.-focused media outlets. In a different hearing yesterday, Twitter’s former security lead turned whistleblower Peiter “Mudge” Zatko noted that half of the content flagged for review on the platform is in a language the company doesn’t support. Facebook whistleblower Frances Haugen has also repeatedly called attention to the same issue, observing that the company devotes 87% of its misinformation spending to English language moderation even though only 9% of the platform’s users speak English. In another eyebrow-raising exchange, Twitter’s Jay Sullivan declined to specifically deny accusations that the company “willfully misrepresented” information given to the FTC. “I can tell you, Twitter disputes the allegations,” Sullivan said, referring to testimony from the Twitter whistleblower on Tuesday.


5 steps to designing an embedded software architecture, Step 1

First, such designs are not very portable. For example, what happens if a microcontroller suddenly becomes unavailable? (Chip shortage, anyone?) If the code is tightly coupled, moving the application code to a new microcontroller becomes a herculean effort: the application code is tightly coupled to low-level hardware calls on the original microcontroller. I know a lot of companies that have suffered through this recently. Those that hadn't updated their architecture had to go back through all their code and change every line that interacted with the hardware. The companies that had updated their architecture had broken that coupling through abstractions and dependency injection. Second, unit testing the application in a development environment rather than on the target hardware is nearly impossible. If the application code makes direct calls to the hardware, a lot of work will go into the test harness to successfully run the tests, or the testing will need to be done on the hardware. Testing on hardware is slow and is often a manual rather than an automated process. 
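
The abstraction-and-dependency-injection fix looks the same in any language. In this minimal Python sketch (all names invented for illustration), the application depends on a `GpioPort` interface rather than vendor register calls, so a `FakeGpio` lets unit tests run on a development host with no target hardware attached:

```python
from abc import ABC, abstractmethod

class GpioPort(ABC):
    """Hardware abstraction layer: application code depends on this
    interface, never on vendor-specific register calls."""
    @abstractmethod
    def write(self, pin: int, high: bool) -> None: ...

class FakeGpio(GpioPort):
    """Test double -- records writes so the application can be unit
    tested on a development machine."""
    def __init__(self):
        self.pins = {}

    def write(self, pin, high):
        self.pins[pin] = high

class StatusLed:
    """Application code; the port is supplied via dependency injection."""
    def __init__(self, port: GpioPort, pin: int = 13):
        self.port, self.pin = port, pin

    def signal_fault(self):
        self.port.write(self.pin, True)
```

Porting to a new microcontroller then means writing one new `GpioPort` implementation; `StatusLed` and the rest of the application never change.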


The promise of sustainable AI may not outweigh the organizational challenges

Without help from technology, outlining sustainability goals would be a limiting and difficult exercise. Enterprises today struggle with quantifying the risk of climate change, especially when it comes to digital transformation. In fact, only 43% of global executives say they are aware of their organization’s IT footprint. Data analytics and AI offer a solution to this challenge, as they provide meaningful insights across industries to understand where those gaps exist and thus can help companies incorporate more sustainable practices. Research shows that 89% of organizations recycle less than 10% of their IT hardware. However, if a company is to truly reap all the environmental benefits of sustainable AI, IT must play a crucial role in using this technology as the organization’s biggest helper, not its adversary. There are four broad areas that offset the sustainability impact of AI machinery and technology: reporting, cloud, circular economy, and coding. Accurate metrics and reporting will keep AI systems intact and constantly improving, while cloud promotes sustainability because users only pay for the infrastructure they use, eliminating the need to run data centers at full capacity.


Measuring performance in agile

It’s really easy to destroy the culture of an agile team with metrics. We need to be sure that what we measure encourages the right behaviour. Using a team’s velocity as a performance measurement comes with a strong warning label: “Scrum’s team-level velocity measure is not all that meaningful outside of the context of a particular team. Managers should never attempt to compare velocities of different teams or aggregate estimates across teams. Unfortunately, we have seen team velocity used as a measure to compare productivity between teams, a task for which it is neither designed nor suited. Such an approach may lead teams to ‘game’ the metric, and even to stop collaborating effectively with each other. In any case, it doesn’t matter how many stories we complete if we don’t achieve the business outcomes we set out to achieve in the form of program-level target conditions.” We’ve all heard about working smarter, not harder, yet when we focus on story points as a measurement, we may succeed in the short term at getting people to complete more story points simply by working harder; this approach will not necessarily achieve the outcomes we want.



Quote for the day:

"Nobody in your organization will be able to sustain a level of motivation higher than you have as their leader." -- Danny Cox

Daily Tech Digest - September 19, 2022

10 mistakes rookie CIOs make — and how to avoid them

Most CIOs have likely heard that “culture eats strategy for breakfast,” the famous quote from management guru Peter Drucker. But rookie CIOs don’t often take that message to heart, according to both researchers and experienced CIOs. “One of the rookie mistakes is not truly understanding your business, culture, and organizational fabric,” says Richard A. Hook, executive vice president and CIO of Penske Automotive Group and CIO of Penske. “Everyone is focused on their 100-day plan, but the reality is the pace and composition of that plan will vary between organizations. Get to know your peers, their teams, your team, and the overall organization before taking a too-aggressive approach. In the end, organizations win with the best people; be sure you know your teams and deeply understand the business before acting too harshly.” Jackson agrees, saying new executives should assess their department’s culture and the organization’s overall culture early on. This, he explains, lets leaders know how to adjust and change so they can be most effective moving forward.


EU Cyber Resilience Act sets global standard for connected products

The EC said the new rules would rebalance security responsibility towards manufacturers, who will be made to ensure they conform to the new requirements, ultimately benefiting end-users across the EU by enhancing transparency, promoting trust, and ensuring better protection of basic rights to privacy. The EC acknowledged the act is likely to become an international point of reference beyond the EU’s internal market, and Keiron Holyome, BlackBerry vice-president for the UK and Ireland, Eastern Europe, Middle East and Africa, agreed with this view. “Today, as the EU launches its Cyber Resilience Act to protect European consumers and businesses from the risks caused by insecure digital products, the UK must sit up and take notice. This act should not be viewed as a European requirement, but in fact a new global standard,” said Holyome. “The EU’s new act further highlights that British organisations must take action, particularly when it comes to the use of potentially insecure smart devices for home working. ... Although smart devices may seem innocent, bad actors can easily access home networks with connections to company devices – or company data on consumer devices – and steal intellectual property worth millions.”


Hybrid workers don't want to return to the office. But soon, they might have to

No doubt many leaders will be paying attention to how major tech companies are reacting to the situation. Apple, for instance, has laid off a number of recruiters and plans to curb hiring next year to help it weather an uncertain economic climate. Meta, Microsoft and Google have also announced plans to slow hiring, and all four tech giants have made moves to get their workers into the office on a more regular basis in recent months. Asking employees to return to the office as a reaction to financial uncertainty seems more like a retreat to what feels familiar than a practicable way of overcoming the challenges ahead. While doing so might help leaders regain a sense of control and run the business as a much tighter ship, it's not necessarily going to improve productivity or engagement. ONS data suggests that 78% of employees who work from home in some capacity report a better work-life balance, and taking this away will not win employers any favours. Workers might, however, choose to return to the office if working from home gets significantly more expensive.


When openness doesn’t matter

“Open is better…unless it isn’t,” notes software exec James Urquhart, who has done his share of work with open source companies. The key to figuring out the “isn’t” in a particular case is to look at the practical effects of a given strategy. Lightbend and Akka founder Jonas Bonér stressed that the company’s decision to change the Akka license was because the current model simply wasn’t sustainable. He says, “With Akka now considered critical infrastructure for many large organizations, the Apache 2.0 model becomes increasingly risky when a small company solely carries the maintenance effort.” To prod these large organizations to pay for their use of Akka, the company turned to the BSL 1.1 as “a form of productive and sustainable open source” that is “easy to understand, provides clear rules, and is enforceable.” Not everyone will like it. Some of the more vocal members of the open source Illuminati have castigated Lightbend for this decision. But rather than criticize, why not simply observe? If it’s truly a bad strategy, it will fail, and both Lightbend and other companies will learn from that failure, and there will be less re-licensing with licenses that are perceived to be less open.


Hacker Accessed LastPass Internal System for 4 Days

The breach investigation was carried out in partnership with cybersecurity firm Mandiant and uncovered that the threat actor's activity was limited to a four-day period until the incident was contained. Further investigation from LastPass and Mandiant determined that the threat actors gained access to the development environment using a developer's compromised endpoint. "While the method used for the initial endpoint compromise is inconclusive, the threat actor utilized their persistent access to impersonate the developer once the developer had successfully authenticated using multifactor authentication," Toubba says. Toubba acknowledges that the threat actor was able to access the development environment but failed to access any customer data or encrypted password vaults. Toubba also says that the LastPass development environment is physically separated from other environments, including the production area, and has no customer data or encrypted vaults. The notification also says that the company does not have access to the master passwords used by the customers, and without having the master password, no one can decrypt vault data as part of the company's "zero-knowledge security model."


Double-transmon coupler will realize faster, more accurate superconducting quantum computers

Toshiba has recently devised a double-transmon coupler that can completely turn on and off the coupling between qubits with significantly different frequencies. Turning the coupling fully on enables high-speed quantum computation with strong coupling, while turning it fully off eliminates residual coupling, which improves both the speed and the accuracy of quantum computation. Simulations with the new technology have shown it realizes two-qubit gates, basic operations in quantum computation, with an accuracy of 99.99% and a processing time of only 24 ns. Toshiba's double-transmon coupler can be applied to fixed-frequency transmon qubits, realizing high stability and ease of design. It is the first to realize coupling between fixed-frequency transmon qubits with significantly different frequencies that can be completely switched on and off, and to deliver a high-speed, accurate two-qubit gate. The technology is expected to advance the realization of higher-performance quantum computers that will contribute in such areas as the achievement of carbon neutrality and the development of new drugs.


Automation Gains a Foothold, But How to Scale It Is the Challenge

“Going forward, automation should be the focus at each business group and department – it must be a mandatory part of business planning,” he explains. Each C-level executive should provide a plan for how much automation they intend to implement each quarter or year to reduce the number of resources their division needs. They should also come up with tangible KPIs that will impact cost reduction and generate savings for sustained growth. Butterfield says this focus must come from the board as a priority item, enabled through technology and implemented by all. “AI and automation are as much a capability as a technology – therefore, even if someone is taking responsibility, the organization will only be successful if everyone is aligned,” he says. Freund says that while automation involves a broad set of stakeholders across both business and IT, it’s not always easy to get them all on the same page. Depending on the organization, a technical leader like an enterprise architect might spearhead the automation process by kicking off a proof of concept (PoC), organizing a team to execute it, and presenting the results to business stakeholders.


Computational Aesthetics in Robotics Design and Automation

Robotic informatics studies how robots interact with their surroundings and how this affects the aesthetics of their design. It considers such questions as how to create visually appealing robots and how to ensure that they behave in an aesthetically pleasing way. Though research is still ongoing in this area, there are already some promising results. For example, a study shows that people respond more favorably to visually engaging robots with well-designed features. That may help increase efficiency and productivity in the industry while creating a more positive image for robotics technology. One of the main ways computational aesthetics impacts robotics programming is by providing new methods for designing and improving robots. These methods include artificial intelligence (AI), machine learning (ML), and deep learning (DL), all of which involve teaching computers to do things that were once impossible for them. ML, in particular, is a form of AI that allows computers to learn from experience and improve their performance over time based on that experience. DL is similar to ML.


6 tips for successfully leading software developers

Developers’ will to create is strong, but it can be hard to perceive as creativity is often obscured by the technological nature of development. Developers communicate with a strange patois of acronyms that hide the artistic spirit behind it. Learning to perceive and nurture that spirit is a special kind of leadership that developers will appreciate. Just the awareness of the creative life of developers is important. Not only will it help to understand where they are coming from, but it will lead to policies and decisions that support that creativity and out of that will come real bottom-line benefit. The space and time to innovate will lead to better software that handles the vicissitudes of business. You need the human creativity of your developers captured in the half-machine/half-thought medium of code to be agile. Perhaps the most important feature for the leader to bear in mind here is in realizing the attachment that developers have to their work. Affection might be a better word than attachment. Building a thing that feels beautiful and worthy in itself has its own momentum. 


Are we experiencing cloudflation?

Many critics cite the lack of a sound cloud finops program to monitor, track, and govern costs. The rudimentary problem right now is that companies have little or no insight into any cloud costs before they get the bill. In other words, if having a finops program scores a 9 out of 10 in terms of cloud cost management maturity, these companies are still at a 1 or 2. This state exists because most enterprises did not see cloud coming—or coming as rapidly as it did due to the pandemic. As a result, they did not allocate budget and resources to manage cloud costs: the hard costs such as cloud computing bills for services, as well as soft costs such as the many expensive humans now needed to keep cloud-based systems running. Here’s the good news: Implementing even a rudimentary finops strategy with cloud cost monitoring and controls will quickly pay for itself. Moreover, it will do so without diminishing cloud services. It accomplishes noninvasive cost savings partly by implementing basic housekeeping tasks, such as shutting down unused instances where the meter is still running or optimizing the use of cloud resources that lack current cost management, with options to automate as deeply as desired or required.
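
Even the "basic housekeeping" level of finops can start as a simple report. The sketch below flags instances whose utilization suggests the meter is running on idle capacity; the data shape, the 5% threshold, and the 730-hour month are all illustrative assumptions, not tied to any cloud provider's billing API:

```python
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    avg_cpu_percent: float  # average utilization over the billing period
    hourly_cost: float

def idle_report(instances, cpu_threshold=5.0, hours_per_month=730):
    """Flag instances whose average CPU sits below the threshold and
    estimate the monthly spend they represent."""
    idle = [i for i in instances if i.avg_cpu_percent < cpu_threshold]
    monthly_waste = sum(i.hourly_cost * hours_per_month for i in idle)
    return [i.name for i in idle], round(monthly_waste, 2)
```

A report like this is the first rung of the maturity ladder the article describes; the candidates it surfaces can then be shut down by hand or, eventually, by automation.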



Quote for the day:

"Hold yourself responsible for a higher standard than anybody expects of you. Never excuse yourself." -- Henry Ward Beecher

Daily Tech Digest - September 18, 2022

5 ways to secure devops

Devops workflows are designed for speed and rapidly iterating with the latest requirements and performance improvements. Gate reviews are static. The tools devops teams rely on for security testing can lead to roadblocks, given their gate-driven design. Devops is a continuous process in high-performance IT teams, while stage gates slow the pace of development. Devops leaders often don’t have the time to train their developers to integrate security from the initial phases of a project. The challenge is how few developers are trained on secure coding techniques. Forrester’s latest report on improving code security from devops teams looked at the top 50 undergraduate computer science programs in the US, as ranked by US News and World Report for 2022, and found that none require secure coding or a secure application design class. CIOs and their teams are stretched thin with the many digital transformation initiatives, support for virtual teams and ongoing infrastructure support projects they have going on concurrently. CIOs and CISOs also face the challenges of keeping their organizations in regulatory compliance with more complex audit and reporting requirements. 


Designing APIs for humans: Error messages

The status code of the response should already tell you if an error happened or not, the message needs to elaborate so you can actually fix the problem. It might be tempting to have deliberately obtuse messages as a way of obscuring any details of your inner systems from the end user; however, remember who your audience is. APIs are for developers and they will want to know exactly what went wrong. It’s up to these developers to display an error message, if any, to the end user. Getting an “An error occurred” message can be acceptable if you’re the end user yourself since you’re not the one expected to debug the problem (although it’s still frustrating). As a developer there’s nothing more frustrating than something breaking and the API not having the common decency to tell you what broke. ... Letting you know what the error was is the bare minimum, but what a developer really wants to know is how to fix it. A “helpful” API wants to work with the developer by removing any barriers or obstacles to solving the problem. The message “Customer not found” gives us some clues as to what went wrong, but as API designers we know that we could be giving so much more information here.
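
A hypothetical helper makes the contrast concrete. Everything here -- the field names, the error code, the documentation URL -- is invented for illustration, not any particular API's schema:

```python
def error_response(status, code, message, doc_url=None, **context):
    """Build an error body that tells the developer what broke and how
    to fix it, instead of a bare 'An error occurred'."""
    body = {"error": {"status": status, "code": code, "message": message}}
    if doc_url:
        body["error"]["doc_url"] = doc_url
    if context:
        body["error"]["context"] = context
    return body

# The unhelpful version: true, but useless for debugging.
unhelpful = {"error": "An error occurred"}

# The helpful version: what happened, the likely cause, where to read more.
helpful = error_response(
    404,
    "customer_not_found",
    "No customer with id 'cus_123' exists; it may have been deleted, or "
    "you may be using a test-mode key against live data.",
    doc_url="https://example.com/docs/errors#customer_not_found",
    resource="customer",
    resource_id="cus_123",
)
```

The pattern is: a machine-readable code for programmatic handling, a human-readable message that suggests causes, and a link to documentation -- each one removing an obstacle between the developer and the fix.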


Arm Neoverse roadmap targets enterprise infrastructure, cloud

"Compute workloads are on a relentless march higher, and becoming more complex," said Chris Bergey, senior vice president and general manager of Arm's infrastructure line of business, at a press briefing. "Machine learning and AI are taking over the future, and so infrastructure will look nothing like the past." Over the next year, Arm will work closely with its cloud and software partners to optimize cloud-native software infrastructure, frameworks and workloads. These partnerships include contributions to projects including Kubernetes and Istio, along with several CI/CD tools used for creating cloud-native software for the Arm architecture. Arm will also work to improve machine learning frameworks such as TensorFlow and a number of workloads such as big data, analytics and media processing. The company is moving into more traditional enterprise spaces now, Bergey said, noting the work it has done with VMware on its Project Monterey and providing support for Red Hat's OpenShift and SAP's HANA. "These cloud providers all use GPUs to underpin their cloud workloads, and the majority of them are using Arm," Bergey said.


How quantum physicists are looking for life on exoplanets

So, some of the biggest things in the universe are certainly quantum mechanical, including supermassive black holes, which can lose energy through a quantum phenomenon known as Hawking radiation. The second point is that one often thinks quantum deals with very low temperatures. Again, to take our sun as an example—it's very hot, but that's quantum mechanical. Low temperature is not a requirement for quantum behavior. This example of a star and the quantumness of the fusion process and the high temperatures associated with that—I just want to broaden the view of what quantum mechanics is and how ubiquitous it is. ... It's quite amazing that we can determine what is in these planets' atmospheres—planets that would be impossible for humans to ever visit. That, and we can look for signatures of life, like, are there molecules that we associate with life floating around in these planets, at least if it's Earth-like life; then we might be able to determine with some probability that some planet way out there that no human could ever visit harbors life. Or maybe we could discover other candidate forms of life.


How Is Platform Engineering Different from DevOps and SRE?

Over time, thought leaders came up with different metrics for organizations to gauge the success of their DevOps setup. The DevOps bible, “Accelerate,” established lead time, deployment frequency, change failure rate and mean time to recovery (MTTR) as standard metrics. Reports like the State of DevOps from Puppet and Humanitec’s DevOps benchmarking study used these metrics to compare top-performing organizations to low-performing organizations and deduce which practices contribute most to their degree of success. DevOps unlocked new levels of productivity and efficiency for some software engineering teams. But for many organizations, DevOps adoption fell short of their lofty expectations. Manuel Pais and Matthew Skelton documented these anti-patterns in their book “DevOps Topologies.” In one scenario, an organization tries to implement true DevOps and removes dedicated operations roles. Developers are now responsible for infrastructure, managing environments, monitoring, etc., in addition to their previous workload. Often senior developers bear the brunt of this shift, either by doing the work themselves or by assisting their junior colleagues.
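
Three of the four "Accelerate" metrics reduce to simple arithmetic over deployment and incident records, as this minimal sketch shows. The record shapes are invented for illustration; real pipelines would pull them from CI/CD and incident-management tooling:

```python
from datetime import datetime, timedelta

def dora_snapshot(deploys, incidents, period_days=30):
    """Compute deployment frequency, change failure rate, and MTTR.
    `deploys` is a list of (timestamp, succeeded) pairs; `incidents`
    is a list of (started, resolved) pairs."""
    freq = len(deploys) / period_days  # deploys per day
    failures = sum(1 for _, ok in deploys if not ok)
    change_failure_rate = failures / len(deploys) if deploys else 0.0
    mttr = (sum((end - start for start, end in incidents), timedelta())
            / len(incidents)) if incidents else timedelta()
    return freq, change_failure_rate, mttr
```

Lead time, the fourth metric, needs commit timestamps joined to deploy timestamps, but follows the same shape: raw event records in, a single comparable number out.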


The Cyber Security Head Game

Just as the predators of the fish below are never going to go away (which is why this fish camouflages itself and sports huge fake eyes to scare predators), cyber predators also will never go away. And the best of these cyber predators will continue to penetrate even the strongest defenses, because the exponential increase in IT system complexity, which makes it increasingly difficult to even understand the full extent of what you're defending, favors cyber attackers over cyber defenders. So we need to assume that some hackers will inevitably get inside our networks, and thus we must adopt strategies of deception, similar to those employed successfully by our fish here, to lessen the harm from competent hackers who manage to get up close and personal. We also need to create doubt in hackers’ minds about the benefits of attacking us in the first place, in the same way that the poisonous cane toad avoids attacks from predators who know the toad’s skin has lethal poison glands, and milk snakes, which have no venom, discourage would-be predators by mimicking the coloration of coral snakes, which definitely do have deadly venom.


US Cyber-Defense Agency Urges Companies to Automate Threat Testing

Automated threat testing is still not very widespread, according to the official, who added that organizations sometimes don’t really follow through after deploying expensive tools on their network and instead just assume they’re doing the job. Automating security controls will make it easier to stop attackers from relying on established tactics. The top threat actors are still going back and leveraging vulnerabilities that are up to 10 years and older, warned the CISA official. CISA is making the recommendation in collaboration with the Center for Threat-Informed Defense, a 29-member nonprofit formed in 2019 that draws on MITRE’s framework. Iman Ghanizada, global head of autonomic security operations at Google Cloud, a research sponsor of the Center, said automated testing is important for creating continuous feedback loops that can steadily improve protection. “Whether you are a large company or a startup, you have to have visibility, analytics, response and continuous feedback,” he said.


Smart Cities: Mobility ecosystems for a more sustainable future

Although every city is different, leading cities are becoming smarter through their participation in large, complex, digitally enabled ecosystems. The question for many urban leaders, however, is how to engage with them effectively. Our experience in working with large transportation and communications clients yields a multilayered model and approach to guide the design and management of urban mobility systems. Given the interconnected nature of the building blocks of mobility, each layer—demand, supply, and foundational—is critical. Cities must understand and manage all the interactions and interdependencies. For example, demand for different forms of transportation is enabled via available modes of transit and supporting infrastructure. None of these would be possible without regulations, financing, insurance, and innovation. ... To achieve its vision of becoming a 45-minute city, Singapore is focusing on building its infrastructure (e.g., it is building intermodal mobility hubs to allow commuters to move seamlessly from one mode of transportation to another). The city is developing a robust innovation ecosystem, collaborating with many private-sector players. 


How to Draw and Retain Top Talent in Cyber Security

Before you introduce policies to increase diversity, you need to know who is currently applying. Gather data on applicants to establish if you need to take proactive steps to attract specific groups – you can’t make rational business decisions without data. Analyze job descriptions to eliminate bias so you aren’t deterring anyone. Review the language -- are you unconsciously drafting job advertisements and application forms with a white male in mind? Consider a post-application survey so you can establish what is appealing to recruits and what might cause them to drop out. You’ll be surprised how many people want to share their feedback because a negative job application process can deter an applicant for good, and you could be missing out on the best talent through ignorance. We implemented an Applicant Tracking System to understand the sources our candidates are coming from, see how diverse the candidate pool is (or not), and improve the candidate experience by being able to track how their process progresses and ends. ... Once you’ve got these cyber professionals on board, you need to keep them. 


Why shift left is burdening your dev teams

Security and compliance challenges are a significant barrier to most organizations’ innovation strategies, according to CloudBees. The survey also reveals agreement among C-suite executives that a shift left security strategy is a burden on dev teams. 76% of C-suite executives say that compliance challenges limit their company’s ability to innovate, and 75% say the same of security challenges. This is due, in part, to the significant time spent on compliance audits, risks, and defects. At the same time, C-suite executives overwhelmingly favor a shift left approach, a strategy of moving software testing and evaluation to earlier in the development lifecycle, placing the burden of compliance on development teams. In fact, 83% of C-suite executives say the approach is important for them as an organization, and 77% say they are currently implementing a shift left security and compliance approach. This is despite 58% of C-suite executives reporting that shift left is a burden on their developers. “These survey findings underscore the urgent need to transform the software security and compliance landscape.”



Quote for the day:

"Courage is the ability to execute tasks and assignments without fear or intimidation." -- Jaachynma N.E. Agu

Daily Tech Digest - September 16, 2022

The AI-First Future of Open Source Data

If we take it one step further from the GPL for data, we begin to see the value equation of data, or “the data-in-to-data-out ratio” as Augustin calls it. He uses this to explain why people are so willing to give up parts of their data and privacy to websites: the small amount of data they’re handing over returns greater value back to them. Augustin sees the data-in-to-data-out ratio as a tipping point in open source data. Calling it one of his application principles, Augustin suggests that data engineers should focus on providing users with more value while taking less and less information from them. He also wants to figure out a way to never ask users for anything, only to provide them an advantage. For example, new app users will always be asked for information. But how can we skip that step and collect data directly in exchange for providing value? “Most people are willing to [give up data] because they get a lot of utility back. Think about the ratio of how much you put in versus how much you get back. You get back an awful lot. People are willing to give up so much of their personal information because they get a lot back,” he says.


How User Interface Testing Can Fit into the CI/CD Pipeline

Reliance on manual testing is a key reason organizations can’t successfully implement CI/CD: if CI/CD involves manual processes, it cannot be sustained, because they slow down the entire delivery cycle. Testing is no longer the sole responsibility of developers or testers, and it takes investment in and integration with infrastructure. Developer teams need to focus on building the coverage that is essential, and they should test workflows rather than individual features to be more efficient. Additionally, manual testers who are not developers can still be part of the process, provided that they use a testing tool that gives them the required automation capabilities in a low-code environment. For example, with Telerik Test Studio, a manual tester can create an automated test by interacting with the application’s UI in a browser. That test can then be presented without code, and as a result testers can learn how the application behaves. Another best practice in making UI testing efficient is to be selective about what is included in the pipeline.
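
Being selective about what runs in the pipeline often comes down to tiering: a small workflow-oriented smoke set runs on every commit, while the full suite is deferred to a nightly job. The test names and tier labels below are hypothetical, just to illustrate the selection mechanism:

```python
# Sketch of tiered UI test selection for a CI/CD pipeline (hypothetical tests):
# only the critical workflow smoke set runs on every commit; the rest run nightly.

SMOKE = "smoke"
NIGHTLY = "nightly"

ui_tests = [
    {"name": "login_workflow", "tier": SMOKE},
    {"name": "checkout_workflow", "tier": SMOKE},
    {"name": "profile_settings", "tier": NIGHTLY},
    {"name": "report_export", "tier": NIGHTLY},
]

def select_tests(tier: str) -> list[str]:
    """Return the names of UI tests that should run for this pipeline stage."""
    return [t["name"] for t in ui_tests if t["tier"] == tier]

print(select_tests(SMOKE))  # ['login_workflow', 'checkout_workflow']
```

Note that the smoke tier is organized around whole workflows (login, checkout) rather than individual features, matching the advice above.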


Want to change a dysfunctional culture? Intel’s Israel Development Center shows how

Intel’s secret weapon, one that until recently it did not talk about much, is its Israel Development Center. Intel is among the largest employers in Israel, a nation surrounded by hostile countries and one where women and men are treated more equally than in most other countries I’ve studied. People there are highly supportive of each other, making it an incredibly welcoming country for women in a wide variety of industries. The facility itself is impressively large and well-built and eclipses Intel’s corporate office in both size and security. The work done there really defines Intel’s historic success in both product performance and quality, making it an example of how a company should be run. Surprisingly, the collaborative and supportive country culture overrode the hostile and self-destructive corporate culture that has defined the US tech industry. What Gelsinger has done is showcase the development center as a template for the rest of Intel, as a firm more tolerant of failure, more supportive of women and focused like a laser on product quality, performance and caring for Intel’s customers.


Uber security breach 'looks bad', potentially compromising all systems

While it was unclear what data the ride-sharing company retained, he noted that whatever it had most likely could be accessed by the hacker, including trip history and addresses. Given that everything had been compromised, he added that there also was no way for Uber to confirm if data had been accessed or altered since the hackers had access to logging systems. This meant they could delete or alter access logs, he said. In the 2016 breach, hackers infiltrated a private GitHub repository used by Uber software engineers and gained access to an AWS account that managed tasks handled by the ride-sharing service. It compromised data of 57 million Uber accounts worldwide, with hackers gaining access to names, email addresses, and phone numbers. Some 7 million drivers also were affected, including details of more than 600,000 driver licenses. Uber later was found to have concealed the breach for more than a year, even resorting to paying off hackers to delete the information and keep details of the breach quiet.


What Is GPS L5, and How Does It Improve GPS Accuracy?

L5 is the most advanced GPS signal available for civilian use. Although it’s primarily meant for life-critical and high-performance applications, such as helping aircraft navigate, it’s available for everyone, like the L1 signal. So the manufacturers of mass-market consumer devices such as smartphones, fitness trackers, in-car navigation systems, and smartwatches are integrating it into their devices to offer the best possible GPS experience. One of the key advantages of the L5 signal is that it uses the 1176.45MHz radio frequency, which is reserved for aeronautical navigation worldwide. As such, it doesn’t have to contend with interference from other radio wave traffic in this frequency, such as television broadcasts, radars, and ground-based navigation aids. With L5 data, your device can access more advanced methods to determine which signals have less error and effectively pinpoint the location. It’s particularly helpful in areas where a GPS signal can be received but is severely degraded.


Digital transformation: How to get buy-in

Today’s IT leader has to be much more than tech-savvy, they have to be business-savvy. IT leaders of today are expected to identify and build support for transformational growth, even when it’s not popular. At Clarios, I included “Challenge the Status Quo, Be a Respectful Activist” to our IT guiding principles, knowing that around any CEO or general manager’s table they need one or two disruptors – IT leaders should be one. However, once that activist IT leader sells their vision to the boss, now they have to drive change in their peers and the entire organization, without formal authority. ... Our IT leaders can gain buy-in on new ideas by actively listening to our business partners. Our focus is to understand from their perspective the challenges impeding their work by rounding in our hospital locations to see first-hand the issues. So when we propose solutions, it is from their perspective. Utilizing these practices, we can bring forth the vision of Marshfield Clinic Health System because we can implement technology that bridges human interaction between our patients and care teams, which is at the heart of healthcare.


How to Prepare for New PCI DSS 4.0 Requirements

There are several impactful changes to the requirements associated with DSS v4.0 compliance, ranging from policy development (all changes will require some level of policy changes), to Public Key Infrastructure (PKI), as there will be multiple changes related to how keys and certificates are managed. Carroll points out there will also be remote access issues, including defined changes to how systems may be accessed remotely, and risk assessments, which must now include multiple, regular “targeted risk assessments” that capture risk in a format specified by the PCI DSS. Dan Stocker, director at Coalfire, a provider of cybersecurity advisory services, points out fintech is growing rapidly, with innovative uses for credit card data. “Entities should realistically evaluate their obligations under PCI," he says. “Use of descoping techniques, such as tokenization, can reduce total cost of compliance, but also limit product development choices.” He explains modern enterprises have multiple compliance obligations across diverse topics, such as financial reporting, privacy, and in the case of service providers, many more.
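
The tokenization descoping technique Stocker mentions can be sketched as follows. The vault class and token format here are hypothetical, purely to show the mechanism: the application only ever handles an opaque token, so the real card number lives solely inside a tightly scoped vault component:

```python
# Hedged sketch of tokenization as a PCI descoping technique (hypothetical
# vault, not a real product): callers store and pass around an opaque token;
# only the vault, which stays in PCI scope, ever sees the real PAN.

import secrets

class TokenVault:
    """Maps random tokens to primary account numbers (PANs)."""
    def __init__(self):
        self._store = {}

    def tokenize(self, pan: str) -> str:
        token = "tok_" + secrets.token_hex(8)  # random; no relation to the PAN
        self._store[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        return self._store[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
assert token.startswith("tok_")                       # opaque reference only
assert vault.detokenize(token) == "4111111111111111"  # vault can recover the PAN
```

Systems that store only tokens fall outside much of the PCI audit scope, which is the cost saving; the trade-off, as Stocker notes, is that product features needing the raw PAN must route through the vault.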


Building Large-Scale Real-Time JSON Applications

A critical part of building large-scale JSON applications is to ensure the JSON objects are organized efficiently in the database for optimal storage and access. Documents may be organized in the database in one or more dedicated sets (tables), over one or more namespaces (databases) to reflect ingest, access and removal patterns. Multiple documents may be grouped and stored in one record either in separate bins (columns) or as sub-documents in a container group document. Record keys are constructed as a combination of the collection-id and the group-id to provide fast logical access as well as group-oriented enumeration of documents. For example, the ticker data for a stock can be organized in multiple records with keys consisting of the stock symbol (collection-id) + date (group-id). Multiple documents can be accessed using either a scan with a filter expression (predicate), a query on a secondary index, or both. A filter expression consists of the values and properties of the elements in JSON. For example, an array larger than a certain size or value is present in a sub-tree. A secondary index defined on a basic or collection type provides fast value-based queries described below.
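
The key scheme described above can be sketched with plain Python dicts standing in for the database. The symbols, dates, and predicate are illustrative:

```python
# Sketch of the record-key scheme above: ticker documents grouped into one
# record per symbol per day, keyed as collection-id (symbol) + group-id (date).

store = {}  # record key -> list of documents (the group)

def record_key(symbol: str, date: str) -> str:
    return f"{symbol}:{date}"  # collection-id + group-id

def insert_tick(symbol: str, date: str, tick: dict) -> None:
    store.setdefault(record_key(symbol, date), []).append(tick)

insert_tick("AAPL", "2022-09-16", {"t": "09:30", "price": 152.1})
insert_tick("AAPL", "2022-09-16", {"t": "09:31", "price": 152.4})
insert_tick("AAPL", "2022-09-15", {"t": "09:30", "price": 150.0})

# Fast logical access to one group via its constructed key:
day = store[record_key("AAPL", "2022-09-16")]
print(len(day))  # 2

# Analogue of a filter expression: keep only documents matching a predicate.
big_moves = [d for d in day if d["price"] > 152.2]
print(big_moves)  # [{'t': '09:31', 'price': 152.4}]
```

Because the key is constructed from the collection-id and group-id, a whole day of ticks for one symbol is retrieved with a single key lookup, and group-oriented enumeration is just iterating keys with a given prefix.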


Digital self defense: Is privacy tech killing AI?

AI needs data. Lots of it. The more data you can feed a machine learning algorithm, the better it can spot patterns, make decisions, predict behaviours, personalise content, diagnose medical conditions, power smart everything, detect cyber threats and fraud; indeed, AI and data make for a happy partnership: “The algorithm without data is blind. Data without algorithms is dumb.” Even so, some digital self defense may be in order. But AI is at risk. Not everyone wants to share, at least, not under the current rules of digital engagement. Some individuals disengage entirely, becoming digital hermits. Others proceed with caution, using privacy-enhancing technologies (PETs) to plug the digital leak: a kind of digital karate chop, digital self defense. They don’t trust website privacy notices; they verify them with tools like DuckDuckGo’s Privacy Grade extension and, soon, machine-readable privacy notices. They don’t tell companies their preferences; they enforce them with dedicated tools, and search anonymously using AI-powered privacy-protective search engines and browsers like DuckDuckGo, Brave and Firefox. 


Why Mutability Is Essential for Real-Time Data Analytics

At Facebook, we built an ML model that scanned all new calendar events as they were created and stored in the event database. Then, in real time, an ML algorithm would inspect each event and decide whether it was spam. If an event was categorized as spam, the ML model code would insert a new field into that existing event record to mark it as spam. Because so many events were flagged and immediately taken down, the data had to be mutable for efficiency and speed. Many modern ML-serving systems have emulated our example and chosen mutable databases. This level of performance would have been impossible with immutable data. A database using copy-on-write would quickly get bogged down by the number of flagged events it would have to update. If the database stored the original events in Partition A and appended flagged events to Partition B, this would require additional query logic and processing power, as every query would have to merge relevant records from both partitions. Both workarounds would have created an intolerable delay for our Facebook users, heightened the risk of data errors, and created more work for developers and/or data engineers.
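
The contrast between the two approaches can be shown with simplified in-memory stand-ins (the event data here is invented for illustration). With a mutable store, flagging an event is a single in-place field write; with append-only partitions, every read pays the cost of a merge:

```python
# Sketch contrasting mutable in-place updates with an append-only layout.

# Mutable store: flagging is one cheap in-place write.
events = {"e1": {"title": "Win a prize!", "spam": False}}
events["e1"]["spam"] = True
print(events["e1"]["spam"])  # True

# Append-only: originals in Partition A, flags appended to Partition B.
partition_a = {"e1": {"title": "Win a prize!"}}
partition_b = [{"id": "e1", "spam": True}]

def read_event(eid: str) -> dict:
    """Every query must merge relevant records from both partitions."""
    merged = dict(partition_a[eid])
    for flag in partition_b:
        if flag["id"] == eid:
            merged["spam"] = flag["spam"]
    return merged

print(read_event("e1"))  # {'title': 'Win a prize!', 'spam': True}
```

At scale, the merge loop in `read_event` is the extra query logic and processing the article warns about, and it runs on every read, whereas the mutable path pays its cost only once, at write time.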



Quote for the day:

"Leadership and learning are indispensable to each other." -- John F. Kennedy

Daily Tech Digest - September 15, 2022

AI is playing a bigger role in cybersecurity, but the bad guys may benefit the most

“Security experts have noted that AI-generated phishing emails actually have higher rates of being opened — [for example] tricking possible victims to click on them and thus generate attacks — than manually crafted phishing emails,” Finch said. “AI can also be used to design malware that is constantly changing, to avoid detection by automated defensive tools.” Constantly changing malware signatures can help attackers evade static defenses such as firewalls and perimeter detection systems. Similarly, AI-powered malware can sit inside a system, collecting data and observing user behavior up until it’s ready to launch another phase of an attack or send out information it has collected with relatively low risk of detection. ... But Finch said, “Given the economics of cyberattacks — it’s generally easier and cheaper to launch attacks than to build effective defenses — I’d say AI will be on balance more hurtful than helpful. Caveat that, however, with the fact that really good AI is difficult to build and requires a lot of specially trained people to make it work well. Run of the mill criminals are not going to have access to the greatest AI minds in the world.”


Cybersecurity’s Too Important To Have A Dysfunctional Team

With such difficulty recruiting and maintaining staff, one option businesses should consider is training and reskilling programmes for existing staff to help bridge the gap. Current cybersecurity professionals can solidify what they already know and stay up to date on the latest learnings. Along with cybersecurity professionals, other technology professionals can be trained and recruited into these roles. Technology professionals are likely to have an affinity for the types of skills needed to succeed in cybersecurity. People from non-technical backgrounds may still be able to learn what is needed to perform in these roles, especially if businesses are willing to invest and cover the cost of the training. When there is a skills shortage, as is currently the case, and when vacancies outstrip the available talent, organisations need to be prepared to be imaginative in finding solutions. Alongside this, arming all teams, regardless of their skills and experience, with the right tools and support is essential. Working with knowledgeable and trusted partners can help outsource some of the work and offset any skills gaps as the external partner becomes an extension of the in-house team.


How Sweden goes about innovating

The innovation agency functions much like its counterparts in other countries, similarly to the Finnish Funding Agency for Technology and Innovation (Tekes) in neighbouring Finland, and to the part of the US National Science Foundation (NSF) that does seed funding on the other side of the Atlantic. The Swedish government gives Vinnova more than €300m each year to invest through grants to different kinds of actors, which might be small companies, research institutes, large competence centres, or consortia of companies working together on projects. Vinnova invests this money along 10 different themes, including sustainable industry and digital transformation. To report on the social and economic effects of its funding, the agency produces two impact studies annually. It has also published a document that describes its approach to tracking the impact of investments. “It’s never the case that we’re alone in the responsibility for success or failure,” says Göran Marklund, head of strategic intelligence and deputy director-general at Vinnova. 


Bringing AI to inventory optimization

Chasing today’s consumer patterns is a losing game, he believes. “It’s important to take a long-term view so that the next time the pattern shifts, you’ll be ready,” he said. The antuit.ai solution works by combining the historical data that supply chains have always used as well as new data becoming available, doing it at a scale perhaps not previously used, and then utilizing emerging technologies like AI and machine learning to process that data, make decisions and then learn from the execution of those decisions. “If I’m a retailer buying from CPG companies to service hundreds of stores, I have to make inventory decisions such as what port to land, what distribution centers to send it to, how to allocate it to the stores down to the shelf level and at what price to sell it,” Lakshmanan explained. “Part of my data equation is knowing what has historically sold, at what price, what promotions I ran, how much inventory did I have and whether there were any external factors, like was it raining. Now, if I know it’s going to rain next week, I have backward and forward-looking data that I can put through an algorithm to determine things like what is the likely demand at a store in Plano, Texas.”


Ambient computing has arrived: Here's what it looks like, in my house

Ambient computing is ignorable computing. It's there, but it's in the background, doing the job we've built it to do. One definition is a computer you use without knowing that you're using it. That's close to Eno's definition of his music -- ignorable and interesting. A lot of what we do with smart speakers is an introduction to ambient computing. It's not the complete ambient experience, as it relies on only your voice. But you're using a computer without sitting down at a keyboard, talking into thin air. Things get more interesting when that smart speaker becomes the interface to a smart home, where it can respond to queries and drive actions, turning on lights or changing the temperature in a room. But what if that speaker wasn't there at all, with control coming from a smart home that takes advantage of sensors to operate without any conscious interaction on your part? You walk into a room and the lights come on, because sensors detect your presence and because another set of sensors indicate that the current light level in the room is lower than your preferences.
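
The closing scenario reduces to a simple sensor rule, sketched below with hypothetical threshold values: presence plus a light level below the user's preference turns the lights on, with no conscious interaction:

```python
# Toy ambient-computing rule for the scenario above (thresholds are invented):
# lights come on only when someone is present AND the room is dimmer than
# that person's preferred light level.

def lights_should_turn_on(presence: bool, lux: float, preferred_lux: float) -> bool:
    return presence and lux < preferred_lux

print(lights_should_turn_on(True, 80.0, 200.0))   # True: occupied and too dim
print(lights_should_turn_on(True, 350.0, 200.0))  # False: already bright enough
print(lights_should_turn_on(False, 80.0, 200.0))  # False: nobody in the room
```

The point of ambient computing is that rules like this run continuously in the background; the user experiences only the effect, never the computation.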


Most enterprises looking to consolidate security vendors

Cost optimization should not be a driver, Gartner VP analyst John Watts said. Those looking to cut costs must reduce products, licenses and features, or ultimately renegotiate contracts. One drawback: among organizations pursuing consolidation, 24% saw a reduction in risk posture rather than an improvement. But if cost savings do result from consolidation, CISOs can invest them in preventing attack surface expansion. “This trend captures a dramatic increase in attack surface emerging from changes in the use of digital systems, including new hybrid work, accelerating use of public cloud, more tightly interconnected supply chains, expansion of public-facing digital assets and greater use of operational technology (cyber physical systems—CPS). Security teams may need to expand licensing, add new features, or point solutions to address this trend,” Watts tells CSO. The time invested should also not be taken for granted: Gartner found that vendor consolidation can take a long time, with nearly two-thirds of organizations saying they have been consolidating for three years.


Software-defined perimeter: What it is and how it works

An SDP is specifically designed to prevent infrastructure elements from being viewed externally. Hardware, such as routers, servers, printers, and virtually anything else connected to the enterprise network that is also linked to the internet, is hidden from all unauthenticated and unauthorized users, regardless of whether the infrastructure is in the cloud or on-premises. "This keeps illegitimate users from accessing the network itself by authenticating first and allowing access second," says John Henley, principal consultant, cybersecurity, with technology research advisory firm ISG. "SDP not only authenticates the user, but also the device being used. When compared with traditional fixed-perimeter approaches such as firewalls, SDP provides greatly enhanced security." Because SDPs automatically limit authenticated users’ access to narrowly defined network segments, the rest of the network is protected should an authorized identity be compromised by an attacker. "This also offers protection against lateral attacks, since even if an attacker gained access, they would not be able to scan to locate other services," Skipper says.
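
The "authenticate first, allow access second" model can be sketched as follows. The identities, devices, and segment names are hypothetical; the point is that until both the user and the device are verified, the visible service list is empty, so there is nothing to scan:

```python
# Minimal sketch of SDP-style access: services are invisible until both the
# user AND the device are authenticated and enrolled (hypothetical data).

AUTHORIZED = {("alice", "laptop-42"): {"payroll-app"}}  # (user, device) -> segments

def visible_services(user: str, device: str, authenticated: bool) -> set:
    if not authenticated:
        return set()  # nothing to see, nothing to scan, nothing to attack
    # Even an authenticated identity only sees its narrowly defined segments.
    return AUTHORIZED.get((user, device), set())

print(visible_services("alice", "laptop-42", authenticated=False))   # set()
print(visible_services("alice", "laptop-42", authenticated=True))    # {'payroll-app'}
print(visible_services("alice", "stolen-phone", authenticated=True)) # set(): device not enrolled
```

The last case shows the device check Henley describes, and the narrow per-identity segment list is what blocks the lateral movement Skipper mentions: a compromised identity still cannot enumerate services outside its own segments.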


Assessing the Security Risks of Emerging Tech in Healthcare

How some of these newer technologies are implemented into existing healthcare environments is also a critical security consideration, other experts say. "Smart hospitals have a blend of old technologies and newer innovations, improving the experience for both the patients and the clinicians," says Sri Bharadwaj, chief operating and information officer of Longevity Health Plan and chair-elect of the Association for Executives in Healthcare Information Security, a healthcare CISO professional organization. The key is to realize that legacy technology that is embedded in "newer shiny objects" still has the same security risks that have to be mitigated through strong administrative and technical controls to provide a robust complement to the newer technology, he says. ... "One thing to always keep in mind is that as security leaders our job is to perform due diligence and assess the risk of all services and technologies. We are also to find ways to help mitigate the risk, where possible, and raise the risk awareness to the organization," she says.


7 tell-tale signs of fake agile

When the focus shifts to granular facets of agile, like Scrum ceremonies, instead of actual content and context, agile’s true principles are lost, says Prashant Kelker, lead partner for digital sourcing and solutions, Americas, at global technology research and advisory firm ISG. Agility is about shipping as well as development. “Developing software using agile methodologies is not really working if one ships only twice a year,” Kelker warns, by way of example. “Agility works through frequent feedback from the market, be it internal or external.” Too often organizations focus on going through the motions without an eye toward achieving business results. Agility is not only about adhering to a methodology or implementing particular technologies; it’s about business goals and value realization. “Insist on key results every six months that are aligned to business goals,” Kelker says. When a team lacks a dedicated product owner and/or Scrum master, it will struggle to implement the consistent agile practices needed to continuously improve and meet predictable delivery goals. CIOs need to ensure they have dedicated team members, and that the product owner and Scrum master thoroughly understand their roles.


Top 10 Microservices Design Principles

Microservices-based applications should have high cohesion and low coupling. The idea behind this concept is that each service should do one thing and do it well, which means that the services should be highly cohesive. These services should also not depend on each other's internals, which means they should have low coupling. The cohesion of a module refers to how closely related its functions are. A high level of cohesion implies that the functions within a module are inextricably related and can be understood as a whole. Low cohesion suggests that the functions within a module are not closely related and cannot be understood as a set. The higher the cohesion, the better: we can say that the module's parts are working together. Coupling measures how much knowledge one module has of another. A high level of coupling indicates that modules know a lot about each other; there is not much encapsulation between them. A low level of coupling indicates that modules are well encapsulated from one another. When components in an application are loosely coupled, you can also test the application more easily.
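
A small sketch makes the two properties concrete (the services and interface here are invented for illustration): each class does one thing (high cohesion), and the ordering service depends only on a narrow interface rather than on the inventory service's internals (low coupling):

```python
# Illustrative sketch of high cohesion / low coupling (hypothetical services).

from typing import Protocol

class StockChecker(Protocol):
    """The only knowledge OrderService has of the inventory world."""
    def in_stock(self, sku: str) -> bool: ...

class InventoryService:
    """Cohesive: only inventory concerns live here."""
    def __init__(self, levels: dict):
        self._levels = levels

    def in_stock(self, sku: str) -> bool:
        return self._levels.get(sku, 0) > 0

class OrderService:
    """Loosely coupled: knows only the StockChecker interface, not the
    InventoryService class or how stock levels are stored."""
    def __init__(self, stock: StockChecker):
        self._stock = stock

    def place_order(self, sku: str) -> str:
        return "accepted" if self._stock.in_stock(sku) else "rejected"

orders = OrderService(InventoryService({"sku-1": 3}))
print(orders.place_order("sku-1"))  # accepted
print(orders.place_order("sku-9"))  # rejected
```

Because `OrderService` depends only on the interface, it can be tested with a stub and the inventory implementation can change freely, which is exactly the testability benefit of loose coupling noted above.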



Quote for the day:

"To be a good leader, you don't have to know what you're doing; you just have to act like you know what you're doing." -- Jordan Carl Curtis

Daily Tech Digest - September 14, 2022

A vision for making open source more equitable and secure

There have been multiple attempts at providing incentive structures, typically involving sponsorship and bounty systems. Sponsorship makes it possible for consumers of open source software to donate to the projects they favor. Only projects at the top of the tower are typically known and receive sponsorship. This biased selection leads to an imbalance: Foundational bricks that hold up the tower attract few donations, while favorites receive more than they need. In contrast, tea will give package maintainers the opportunity to publish their releases to a decentralized registry powered by a Byzantine fault-tolerant blockchain to eliminate single sources of failure, provide immutable releases, and allow communities to govern their regions of the open-source ecosystem, independent of external agendas. Because of the package manager’s unique position in the developer tool stack—it knows all layers of the tower—it can enable automated and precise value distribution based on actual real-world usage.


Cognitive Overload: The hidden cybersecurity threat

Cognitive overload occurs when workers are trying to take in too much information or execute too many tasks. This typically falls under two areas for cybersecurity analysts: intrinsic load, the piecing together of complex technical information to perform incident response activities; and extraneous load, the other 97% of data in a SIEM that they must filter out, while also handling team conversations and sidebar questions. Ultimately, cognitive overload leads to poor performance levels, a lack of focus, and a lack of fulfillment. This can have particularly detrimental consequences within cybersecurity, where ransomware attacks rose 13% year-over-year – more than the past five years combined. To boot, just under half of senior cyber professionals (45%) have considered quitting the industry altogether because of stress. To accommodate the needs of this critical workforce – and fill the 771,000 cyber positions open today – companies must make easing cognitive overload a top priority. Today, it stems from two major issues. First, organizations typically lack direction in cybersecurity, tasking analysts with a broad and daunting mandate: defend our infrastructure. It’s too abstract and leaves them unsure of their roles and responsibilities. 


Medical device vulnerability could let hackers steal Wi-Fi credentials

A vulnerability found in an interaction between a Wi-Fi-enabled battery system and an infusion pump for the delivery of medication could provide bad actors with a method for stealing access to Wi-Fi networks used by healthcare organizations, according to Boston-based security firm Rapid7. The most serious issue involves Baxter International’s SIGMA Spectrum infusion pump and its associated Wi-Fi battery system, Rapid7 reported this week. The attack requires physical access to the infusion pump. The root of the problem is that the Spectrum battery units store Wi-Fi credential information on the device in non-volatile memory, which means that a bad actor could simply purchase a battery unit, connect it to the infusion pump, and quickly turn it on and off again to force the infusion pump to write Wi-Fi credentials to the battery’s memory. Rapid7 added that the vulnerability carries the additional risk that discarded or resold batteries could also be acquired in order to harvest Wi-Fi credentials from the original organization, if that organization hadn’t been careful about wiping the batteries before getting rid of them.


Four Action Steps for Shoring Up OT Cybersecurity

Having proactive safeguards in place is important, but it’s also critical to have effective reactive procedures ready to respond to intrusions, especially to quickly restore the integrity of operations, applications, data, or any combination of the three. Key ICS and SCADA functions should be backed up with hot standbys featuring immediate failover capabilities should their primary counterparts be disrupted. For data protection, automated and contemporaneous backups are preferable, or at the very least backups should be taken weekly. Ideally, the backup storage will be off-network and, even better, offsite, too. The former protects backup data in case malware, such as ransomware, succeeds in circumventing defense-in-depth and network segmentation measures and locks it up. ... Like plant health, safety and environment (HSE) programs, cybersecurity should be considered alongside them as a required mainstay risk-reduction program with support from executive management, owners, and the board of directors.


The Future of the Web: The good, the bad and the very weird

The rise of big technology companies over the last two decades has made the internet more usable for most people, but has also led to the creation of a series of 'walled gardens' controlled by them, within which information is held and not easily relocated. As a result, a small number of very large companies control what you search for online, where you share information with your friends, and even where you do your shopping. Even worse, these companies have done much to develop what is effectively 'surveillance capitalism' -- taking the information we have shared with them (about what we do, where we go and who we know) to sell to advertisers and others. As smartphones have become one of the key ways we access the web, that surveillance capitalism now follows us wherever we go. And while the rise of social media (the so-called 'Web 2.0' era) promised to make it possible for individuals to produce and share their own content, it was still mostly the big tech companies that remained the gatekeepers. A platform that was once about openness seems to be dominated by big tech.


Authorization Challenges in a Multitenant System

Restricting users to the data that belongs to their tenant is the most fundamental requirement of multitenant authorization. Tenant isolation barriers are needed to prevent users from accessing sensitive information owned by another account. Such a breach would erode trust in your service and, depending on the type of exposure that occurred, could leave you liable to regulatory penalties. Tenant identification usually occurs early in the lifecycle of a request. Your service should authenticate the user, determine the tenant they belong to, and then limit subsequent interactions to data that’s associated with that tenant. ... Another complication occurs when tenants require unique combinations of roles and actions to mirror their organization’s structures. One org might be satisfied with admin and read-only roles; another may need the admin role to be split into five distinct assignments. The most effective multitenant authorization systems will flexibly accommodate customizations on a per-tenant basis. At the application level, granular permission checks will remain the same; however, the system will need to be configurable so tenants can create their own roles by combining different permissions.
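The two ideas above -- a hard tenant-isolation barrier plus per-tenant custom roles built from shared permissions -- can be sketched together. All names here (`Tenant`, `User`, `can`) are illustrative, not from any particular authorization library:

```python
from dataclasses import dataclass, field

@dataclass
class Tenant:
    name: str
    # Per-tenant role definitions: role name -> set of permission strings.
    # Each tenant combines permissions into roles that mirror its org.
    roles: dict = field(default_factory=dict)

@dataclass
class User:
    username: str
    tenant: Tenant
    role: str

def can(user: User, permission: str, resource_tenant: Tenant) -> bool:
    """Deny across tenant boundaries first, then check the permissions
    granted by the user's role as defined by their own tenant."""
    if user.tenant is not resource_tenant:  # tenant isolation barrier
        return False
    return permission in user.tenant.roles.get(user.role, set())

# One tenant is satisfied with admin/read-only; another defines its own roles.
acme = Tenant("acme", roles={
    "admin": {"read", "write", "delete"},
    "read-only": {"read"},
})
globex = Tenant("globex", roles={
    "auditor": {"read"},
    "editor": {"read", "write"},  # custom role combining permissions
})

alice = User("alice", acme, "admin")
bob = User("bob", globex, "auditor")
```

Note that the application-level check (`can(user, permission, tenant)`) is identical for every tenant; only the role-to-permission mapping is tenant-configurable, which is the flexibility the excerpt describes.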


Deployment Patterns in Microservices Architecture

The Multiple Service Instances per Host pattern involves provisioning one or more physical or virtual hosts, each of which then executes multiple service instances. This pattern has two variants: in one, each service instance runs as its own process; in the other, several service instances run within the same process. One of the most beneficial features of this pattern is its efficiency in terms of resources, as well as its seamless deployment. The pattern has low overhead, making it possible to start a service quickly. Its major drawback is the lack of isolation between service instances, unless each instance runs as a separate process. The resource consumption of each service instance also becomes difficult to determine and monitor when several instances are deployed in the same process. The Service Instance per Host pattern is a deployment strategy in which only one microservice instance executes on a particular host at a time. Note that the host can be a virtual machine or a container running just one service instance at a time.
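The two variants of the Multiple Service Instances per Host pattern can be contrasted in a minimal sketch. This is purely illustrative (the `service` function and helper names are hypothetical): the process-per-instance variant lets the operating system attribute CPU and memory to each instance, while the shared-process variant pools resource usage across instances.

```python
import multiprocessing
import threading

def service(name: str, results):
    # Stand-in for a microservice handling one request.
    results[name] = f"{name} handled request"

def run_as_processes(names):
    # Variant 1: each service instance is its own OS process, so the OS
    # can report resource consumption per instance.
    with multiprocessing.Manager() as mgr:
        results = mgr.dict()
        procs = [multiprocessing.Process(target=service, args=(n, results))
                 for n in names]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        return dict(results)

def run_in_one_process(names):
    # Variant 2: all instances share a single process (threads here), so
    # resource consumption is pooled and hard to monitor per instance.
    results = {}
    threads = [threading.Thread(target=service, args=(n, results))
               for n in names]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

In practice the same trade-off shows up at the infrastructure level: separate containers or VMs per instance give isolation and per-instance metrics at the cost of overhead, while packing instances into one runtime is cheaper but opaque.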


Bursting the Microservices Architectures Bubble

The buzz surrounding microservices in recent years doesn't reflect the sudden emergence of the microservices concept at that time, however. Microservices architectures actually have a long history that stretches back decades. But they didn't really catch on and gain mainstream focus until the early-to-mid 2010s. So, why did everyone go gaga over microservices starting about ten years ago? That's a complex question, but the answer probably involves the popularization around the same time of two other key trends: DevOps and cloud computing. You don't need microservices to do DevOps or use the cloud, but microservices can come in handy in both of these contexts. For DevOps, microservices make it easier in certain important respects to achieve continuous delivery because they allow you to break complex codebases and applications into smaller units that are easier to manage and easier to deploy. And in the cloud, microservices can help to consume cloud resources more efficiently, as well as to improve the reliability of cloud apps.


New Survey Shows 6 Ways to Secure OT Systems

A fundamental principle of OT security is the need to create an air gap between ICS and OT systems and IT systems. This basic network cybersecurity design employs an industrial demilitarized zone (IDMZ) to prevent threat actors from moving laterally across systems, but the survey finds that only about half of organizations have an IDMZ within their OT architecture, and 8% are working on it. The healthcare, public health and emergency services sectors were especially behind: nearly 40% of respondents in those sectors don't have plans to implement an IDMZ. Implementing a DMZ is a basic best practice, Ford says. "The risk is lateral movement, where a breach can move from IT to OT or vice versa, or from low-value network assets to high-value network assets," Ford says. "The more attackers can penetrate your infrastructure, the greater damage and downtime they can cause. Segmentation via a DMZ (demilitarized zone) provides an air gap between IT and OT, and additional segmentation can further protect business-critical assets with strong access controls, firewalls and policy rules based on zero trust."
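The segmentation policy described above reduces to a default-deny rule set over network zones. A minimal sketch, with entirely hypothetical zone names and no specific firewall product assumed: IT traffic must terminate in the IDMZ, the IDMZ relays approved traffic into OT, and any zone pair not explicitly allowed is denied (zero trust).

```python
# Illustrative zone-based segmentation policy: direct IT <-> OT traffic
# is never allowed; everything must be brokered through the IDMZ.
ALLOWED_FLOWS = {
    ("it", "idmz"),   # IT systems may reach brokered services in the IDMZ
    ("idmz", "ot"),   # the IDMZ relays approved traffic into OT
    ("ot", "idmz"),   # OT data is published outward via the IDMZ
}

def is_allowed(src_zone: str, dst_zone: str) -> bool:
    """Default-deny: only explicitly whitelisted zone pairs may communicate."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS
```

Real IDMZ designs enforce this with firewalls and brokered services (jump hosts, data diodes, protocol proxies) rather than application code, but the policy shape -- explicit allow list, implicit deny -- is the same.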


Wearable devices: invasion of privacy or health necessity?

Dangling the carrot of free technology is a way to engage customers, but protection is vital should wearable technology be compromised. This data isn’t simply name, address and payment details, but potentially highly personal data about an individual’s wellbeing. The insurance industry will need to develop solutions that help protect the policyholder and reassure the individual that their data is secure. With GDPR, UK GDPR and other regulations around the world to consider, insurers are spending considerable time and investment in ensuring data is well protected. The ubiquitous nature of wearables has helped increase engagement with insurance, and customers have been introduced to the numerous health benefits of using these devices. If you’ve already got a device tracking your wellbeing, why would you not want a doctor doing the same? By becoming an extension of the wearable itself, wearable insurance is likely to be broadly accepted by customers.



Quote for the day:

"Different times need different types of leadership." -- Park Geun-hye