
Daily Tech Digest - August 06, 2023

California Opens Privacy Probe Into Car Data Collection

Modern vehicles are equipped with a wide range of sensors, cameras, and other technologies that generate vast amounts of data. This data includes information about the vehicle’s location, speed, acceleration, braking, and even driver behavior. Additionally, connected car systems can collect data on music preferences, navigation history, and other personal preferences. Car data is collected by various parties, including automakers, technology companies, and third-party service providers. This data is used for a variety of purposes, such as improving vehicle performance, developing new features, and providing personalized services to consumers. However, concerns have been raised about the potential misuse of or unauthorized access to this sensitive information. The investigation by the California Privacy Protection Agency highlights the importance of protecting consumer privacy in the context of car data collection. As vehicles become more connected and autonomous, the amount of data being generated increases exponentially.


An eventful week in the world of Arm and RISC-V

What’s most intriguing with all of these coincidental events, though, is NXP Semiconductors’ announcement. Almost all the initial investor companies announced in the new, unnamed organization are also Arm licensees. The press release states: “Semiconductor industry players Robert Bosch GmbH, Infineon Technologies AG, Nordic Semiconductor, NXP Semiconductors, and Qualcomm Technologies, Inc., have come together to jointly invest in a company aimed at advancing the adoption of RISC-V globally by enabling next-generation hardware development.” So, was this strategically timed to coincide with Arm’s annual meet? What’s also intriguing is that the announcement says a new company has been formed, but the company isn’t named. Maybe the disclaimer is the added statement that “the company formation will be subject to regulatory approvals in various jurisdictions.” The new unnamed company, formed in Germany, also “calls on industry associations, leaders, and governments, to join forces in support of this initiative which will help increase the resilience of the broader semiconductor ecosystem.”


How Agile Management Disrupts the Status Quo

Agile is a relatively new project management methodology, and you might wonder how it differs from the typical or traditional project or team management approach an organization might use, and how it disrupts those traditional approaches. Agile principles are designed to allow for more seamless collaboration, feedback, and flexibility to ensure faster and more thorough success in bringing high-quality products to market. Agile methodology and coaching should focus on bringing together stakeholders, developers, programmers, and end-users to support the underlying principles. This management methodology encourages and facilitates ongoing conversations and regular communication as a primary means of measuring progress with incremental development. However, “incremental” movement doesn’t necessarily translate to slowing down the process. In fact, team member input—and, importantly, user input—ultimately allows for a more effective, functional, and satisfying final product.


A Journey Through Software Development Paradigms

In the quest for seamless collaboration and integration between development and operations, we encounter DevOps, a paradigm that bridges the gap between siloed teams and fosters a culture of continuous integration, delivery, and learning. We explore the triumphs and challenges faced by organizations adopting DevOps, witnessing its potential to accelerate software delivery, improve quality, and enhance customer experiences. Beyond the familiar shores of Agile and DevOps, our journey ventures into the uncharted territories of emerging paradigms, each holding the promise of further transformation. Lean Software Development, Continuous Delivery, and Site Reliability Engineering (SRE) await our exploration, revealing new insights and practices that continue to shape the future of software development. As we reach the culmination of our voyage, we stand in awe of the pioneers and visionaries who have paved the way for progress, embracing adaptation and innovation in the pursuit of excellence. 


The Rise of Emotionally Aware Technology: A Deep Dive into Global Affective Computing

One of the key drivers behind the rise of affective computing is the increasing demand for personalized user experiences. Today’s consumers expect their devices to understand their needs and preferences and to respond accordingly. Emotionally aware technology can meet these expectations by adapting its responses based on the user’s emotional state. For example, a virtual assistant that can detect frustration in a user’s voice could offer to simplify its instructions or provide additional support. Another factor contributing to the growth of affective computing is the advancement in machine learning and AI technologies. These technologies enable computers to learn from data and improve their performance over time, making it possible for them to recognize and interpret complex human emotions. For instance, facial recognition software can now analyze subtle facial expressions to determine a person’s mood, while natural language processing can interpret the emotional tone in written text.
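As a concrete (and deliberately simplified) illustration of that last point, a toy lexicon-based tone scorer sketches how a system might flag frustration in text. The word lists and labels below are invented for illustration; real affective-computing systems rely on trained models rather than fixed vocabularies:

```python
# Toy lexicon-based tone scorer. The word lists are illustrative only --
# production systems use trained models, prosody, and context, not lookups.
FRUSTRATION_WORDS = {"frustrated", "annoying", "confusing", "stuck", "useless"}
POSITIVE_WORDS = {"great", "thanks", "helpful", "love", "perfect"}

def tone_score(text: str) -> str:
    # Normalize: lowercase and strip common trailing punctuation.
    words = {w.strip(".,!?").lower() for w in text.split()}
    frustration = len(words & FRUSTRATION_WORDS)
    positive = len(words & POSITIVE_WORDS)
    if frustration > positive:
        return "frustrated"   # e.g. a virtual assistant could simplify its instructions
    if positive > frustration:
        return "satisfied"
    return "neutral"

print(tone_score("This is so confusing, I'm stuck and frustrated!"))  # frustrated
```

A real system would combine multiple signals and return calibrated probabilities rather than a single label, but the basic loop of "map observable cues to an emotional state, then adapt the response" is the same.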


Digital twins: The key to smart product development

In advanced industries, survey data indicate that almost 75 percent of companies have already adopted digital-twin technologies that have achieved at least medium levels of complexity. There is significant variance between sectors, however. Players in the automotive—and aerospace and defense—industries appear to be more advanced in their use of digital twins today, while logistics, infrastructure, and energy players are more likely to be developing their first digital-twin concepts. One major aerospace company is developing a machine-learning-based geometry optimization system that can simulate thousands of different configurations at high speed to identify weight savings, aerodynamic improvements, and other performance benefits. A European software company is building a multiphysics model of the human heart to support drug and medical-device development. In the United States, an automotive company is building a system that can model all the software and hardware configurations it offers. The system will be used to simulate the effect of design improvements before they are delivered to customers as over-the-air updates. 
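The idea of simulating thousands of configurations to find performance gains can be sketched in miniature. Here `simulate_weight` is a made-up stand-in for a real physics or ML surrogate model, and the parameter names and ranges are invented:

```python
import random

random.seed(42)

# Hypothetical cost model: thinner panels save weight but need stiffening
# ribs. The formula is purely illustrative, not real engineering.
def simulate_weight(thickness_mm: float, rib_count: int) -> float:
    return thickness_mm * 10 + rib_count * 2 + (5 / thickness_mm)

# Evaluate a thousand random candidate geometries and keep the best one.
candidates = [(random.uniform(1.0, 5.0), random.randint(2, 10))
              for _ in range(1000)]
best = min(candidates, key=lambda cfg: simulate_weight(*cfg))
print("best configuration:", best)
```

A real digital twin would replace the cost function with a high-fidelity simulation or learned surrogate and the random search with a smarter optimizer, but the evaluate-many-candidates loop has the same shape.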


Four technology disruptions organizations must watch

Digital humans are becoming more and more like real people. They are readily available and have the ability to interact over a screen to handle a service-based issue or provide customer service instantly. As digital human software is integrated with natural language processing and robotic process automation tools, digital humans will become more of a presence in workflows of more and more processes. Consulting leaders should focus, both singly and in tandem, with leaders of other parts of an organization, on crafting approaches their clients can use to leverage a digital human workforce. Service delivery leaders — particularly within business process outsourcing providers — should be developing a strategy to deploy digital humans within their service delivery functions. ... A decentralized autonomous organization (DAO) is a digital entity, running on a blockchain (which provides a secure digital ledger for communication tracking), that can engage in business interactions with other DAOs, digital and human agents, as well as corporations, without conventional human management.


Bitcoin Beyond the Currency – the Disruption of Industries

The Bitcoin economy has the potential to become the biggest economy in the world; bigger than the United States or China. Bitcoin is a solution for everyone in the world who lives in fear of inflation risk, currency risk, or regime risk: a global, decentralized, trustless settlement layer and means of exchange with no state backing or intervention. For that to happen, BTC has to be more than a store of value; it has to be a currency. We have to stop thinking about it in terms of market capitalization and start thinking about it in terms of a gross decentralized product, the “GDP” of the Bitcoin economy. One doesn’t talk about the market capitalization of the dollar; we shouldn’t think of Bitcoin in those terms either. Bitcoin is continuing to become increasingly vital as legacy institutions fall behind the strides being made in the technology sector. These breakthroughs are significantly disrupting incumbent industries, ranging from those commonly considered, such as banking and finance, to more unique industries, such as insurance and energy.


Mitigating AI Risks: Tips for Tech Firms in a Rapidly Changing Landscape

Keep in mind: despite their capabilities, large language models can’t tell the difference between what’s real and what’s not. And when asked to verify if something is true, they “frequently invent dates, facts, and figures.” While this stresses the importance of fact-checking on the end-user’s part, you could still face a lawsuit for defamation if any misleading information is published or shared with the public. In fact, ChatGPT-creator OpenAI is already being sued for libel after the system made false accusations against a radio host in the United States, claiming that he had embezzled funds from a non-profit organization. This is the first case of this nature against OpenAI, and it could test the legal viability of any future AI-related defamation lawsuits. However, some legal experts believe the case may be challenging to maintain since there were no actual damages and OpenAI wasn’t notified about the claims or given the opportunity to remove them. Beyond defamation, tech firms that deploy large language models in user support systems can also face general liability risks relating to physical harm.


Data Democratization’s Impact on Users and Governance

A key result of increased user involvement in the nuts and bolts of data is the increased importance of data literacy throughout the organization, Stodder added. “It’s essential for organizations to understand what their current capabilities are and to make a plan to address any stumbling block they’re having.” Training tailored to the full range of user personas, from advanced users to more basic data consumers, will be critical to any data democratization effort. ... Another critical aspect of a democratization effort is an effective governance program. “Organizations can easily expand their data programs faster than they expand their governance programs,” Stodder explained, “which, given the existing strain placed on governance by regulations and the complexity of the data landscape, can only compound the problems.” Some of these governance issues can also be exacerbated by the distributed nature of a democratized landscape. “Many organizations are trying to consolidate to a kind of hub-and-spoke model,” Stodder said, “which has been effective for many of them.”



Quote for the day:

“When something is important enough, you do it even if the odds are not in your favor.” -- Elon Musk

Daily Tech Digest - July 26, 2022

Don’t get too emotional about emotion-reading AI

Unfortunately, the “science” of emotion detection is still something of a pseudoscience. The practical trouble with emotion detection AI, sometimes called affective computing, is simple: people aren’t so easy to read. Is that smile the result of happiness or embarrassment? Does that frown come from a deep inner feeling, or is it made ironically or in jest? Relying on AI to detect the emotional state of others can easily result in a false understanding. When applied to consequential tasks, like hiring or law enforcement, the AI can do more harm than good. It’s also true that people routinely mask their emotional state, especially in business and sales meetings. AI can detect facial expressions, but not the thoughts and feelings behind them. Business people smile and nod and empathetically frown because it’s appropriate in social interactions, not because they are revealing their true feelings. Conversely, people might dig deep, find their inner Meryl Streep and feign emotion to get the job or lie to Homeland Security. In other words, the knowledge that emotion AI is being applied creates a perverse incentive to game the technology.


How AI and decision intelligence are changing the way we work

Technology can also provide a simple yet powerful AI tool for employees to use during their day-to-day activities. They can capture lessons learned as they work in real time, and adjust their actions when a corrective action is needed, also in real time. Throughout this process, AI defines actionable takeaways, shares insights and offers concise lessons learned (suggesting corrective actions, for example), all of which can boost the entire team’s performance. Since AI turns the data collected from daily work into actionable lessons learned, every team member can contribute to and draw on their team’s collective knowledge — and the entire company’s collective knowledge as well. The technology prompts them to capture their work, and it “knows” when a team member should see information relevant to their current task. AI ensures everyone has the right data at the right time, exactly when they need it. In this vision of a data-driven environment, access to data liberates and empowers employees to pursue new ideas, Harvard Business Review writes.


The emergence of multi-cloud networking software

Contrary to general perception, Hielscher argues that many enterprises do not voluntarily choose to operate within a multi-cloud environment. In many cases, the environment is thrust upon them through a merger, acquisition, or an isolated departmental choice that preceded a decision to consolidate architectures. "This results in organizational gaps, skill-set gaps, and contractual and spending overlaps," he explains. "As with any IT strategy, the first step is to establish which goals are to be addressed and the timeframes to address them in." Potential adopters should be prepared to spend both time and money when evaluating and comparing MCNS products. "For example, organizations should plan costs associated with staffing a team of engineers to see them through the evaluation process," Howell says. While virtually all large cloud-focused enterprises, and many smaller organizations, can benefit from the right MCNS, it's important to keep an eye on service and the bottom line. "Benefits to the enterprise must be greater than the cost of the solution," Howell warns.


Software Methodologies — Waterfall vs Agile vs DevOps

Software development projects that are clearly defined, predictable, and unlikely to undergo considerable change are best handled using the waterfall method. Typically, smaller, simpler undertakings fall under this category. Waterfall projects don't incorporate feedback during the development cycle, are rigid in their process definition, and offer little to no output variability. Agile methods are built on incremental, iterative development that promptly produces a marketable business product. The product is broken down into smaller pieces throughout incremental development, and each piece is built, tested, and modified. Agile initiatives don't begin with thorough definitions in place. They rely on ongoing feedback to guide their progress. DevOps, for its part, is all about merging teams and automation. Agile development is adaptable to both traditional and DevOps cultures. In contrast to a typical dev-QA-ops organization, developers do not throw code over the wall in DevOps. In a DevOps setup, the team is in charge of overseeing the entire procedure.


Why you need to protect abandoned digital assets

The dangers posed by these abandoned assets are multifarious. Local digital assets can be usurped and used for malicious purposes, such as identity theft and credit card fraud. Not only does this leave organisations open to significant fines for breaches of data protection laws, but there is also the associated reputational harm caused by these incidents. “The risk depends on what the connection is pointing to and what authentication or security measures have been put in place,” says Nahmias. “Security teams tend to be more lenient about connections to internal resources than they are about connections to external ones.” The distributed nature of modern enterprise means that networks are no longer spiders’ webs, but a complex mesh. While this is a far more robust form of network connectivity, there are also far more connections that need to be managed. As such, there is a potential risk of network connections from abandoned assets still being active, essentially permitting access to the rest of the corporate network. In many ways, this is a far greater risk to the organisation, as malicious actors could potentially obtain confidential information through these unsecured connections.


How the cybersecurity skills gap threatens your business

The deficit in skilled cybersecurity personnel is now directly affecting businesses’ ability to remain secure. The World Economic Forum has stated that 60 per cent of organisations would “find it challenging to respond to a cybersecurity incident owing to the shortage of skills within their team”, and industry body ISACA found that 69 per cent of those businesses that have suffered a cyber attack in the past year were somewhat or significantly understaffed. The impacts can be devastating. Accreditation body (ISC)²’s Cybersecurity Workforce Study found that staff shortages were leading to misconfigured systems, tardy patching of systems, lack of oversight, insufficient risk assessment, lack of threat awareness and rushed deployments. With these shortages now jeopardising businesses’ ability to function, the hiring function is under significant pressure to up its game. To make matters worse, these shortages are expected to intensify. Last year the Department for Culture, Media and Sport (DCMS) predicted there would be an annual shortfall of 10,000 new entrants into the cybersecurity market, but in its latest report, released in May, that was revised to 14,000 every year.


Kanban vs Scrum: Differences

Kanban is a project management method that helps you visualize the project status. Using it, you can readily visualize which tasks have been completed, which are currently in progress, and which tasks are still to be started. The primary aim of this method is to identify potential roadblocks and resolve them as soon as possible while continuing to work on the project at an optimum speed. Besides helping ensure on-time quality, Kanban ensures all team members can see the project and task status at any time. Thus, they can have a clear idea about the risks and complexity of the project and manage their time accordingly. However, the Kanban board involves minimal communication. ... Scrum is a popular agile method ideal for teams who need to deliver the product in the quickest possible time. This involves repeated testing and review of the product. It focuses on the continuous progress of the product by prioritizing teamwork. With the help of Scrum, product development teams can become more agile and decisive while becoming responsive to surprising and sudden changes. Being a highly transparent process, it enables teams and organizations to evaluate projects better, as it involves more practicality and fewer predictions.
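A Kanban board's core mechanics fit in a few lines of code. This is a minimal sketch, not any particular tool's API; the column names and the work-in-progress (WIP) limit are illustrative choices:

```python
# Minimal Kanban board sketch: tasks move across columns, and a WIP limit
# on "In Progress" surfaces bottlenecks instead of letting work pile up.
class KanbanBoard:
    def __init__(self, wip_limit: int = 3):
        self.columns = {"To Do": [], "In Progress": [], "Done": []}
        self.wip_limit = wip_limit

    def add(self, task: str):
        self.columns["To Do"].append(task)

    def move(self, task: str, src: str, dst: str) -> bool:
        # Refuse the move if it would exceed the WIP limit -- this is the
        # "potential roadblock" a Kanban board makes visible.
        if dst == "In Progress" and len(self.columns[dst]) >= self.wip_limit:
            return False
        self.columns[src].remove(task)
        self.columns[dst].append(task)
        return True

board = KanbanBoard(wip_limit=1)
board.add("write docs")
board.add("fix login bug")
board.move("write docs", "To Do", "In Progress")
blocked = board.move("fix login bug", "To Do", "In Progress")  # exceeds WIP limit
print(blocked)  # False
```

The refused move is exactly the signal a Kanban team watches for: work stacking up behind a full column marks the roadblock to resolve.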


8 top SBOM tools to consider

Indeed, SBOMs are no longer just a good idea; they're a federal mandate. According to President Joe Biden's May 12, 2021, Executive Order on Improving the Nation’s Cybersecurity, they're a requirement. The order defines an SBOM as "a formal record containing the details and supply chain relationships of various components used in building software." It's an especially important issue with open-source software, since "software developers and vendors often create products by assembling existing open-source and commercial software components." Is that true? Oh yes. We all know that open-source software is used everywhere for everything. But did you know that managed open-source company Tidelift counts 92% of applications as containing open-source components? In fact, the average modern program comprises 70% open-source software. Clearly, something needs doing. The answer, according to the Linux Foundation, the Open Source Security Foundation (OpenSSF), and OpenChain, is SBOMs. Stephen Hendrick, the Linux Foundation's vice president of research, defines SBOMs as "formal and machine-readable metadata that uniquely identifies a software package and its contents; it may include other information about its contents, including copyrights and license data."
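To make "machine-readable metadata" concrete, here is a sketch of inspecting a tiny, hand-written SBOM in the spirit of the CycloneDX format. It is heavily simplified (real SBOMs carry far more supply-chain metadata), and the component names and license tags are invented:

```python
import json

# A minimal, hand-written SBOM in the spirit of CycloneDX. Component names,
# versions, and license tags are invented for illustration.
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "openssl",   "version": "3.0.8",  "licenses": ["Apache-2.0"]},
    {"name": "zlib",      "version": "1.2.13", "licenses": ["Zlib"]},
    {"name": "acme-core", "version": "2.1.0",  "licenses": ["Proprietary"]}
  ]
}
"""

sbom = json.loads(sbom_json)

# Tally how much of the application is open-source components.
open_source = [c for c in sbom["components"] if "Proprietary" not in c["licenses"]]
share = len(open_source) / len(sbom["components"])
print(f"{share:.0%} of components are open source")
```

Auditing a real SBOM works the same way: parse the machine-readable record, then query components, versions, and licenses, which is what makes SBOMs useful for supply-chain security at scale.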


The race to build a social media platform on the blockchain

DSCVR, a blockchain-based social network built on Dfinity’s Internet Computer protocol, has entered the race to build a scalable DeSo platform with $9 million in seed funding led by Polychain Capital. Other participants in the round include Upfront Ventures, Tomahawk VC, Fyrfly Venture Partners, Shima Capital and Bertelsmann Digital Media Investments (BDMI), according to the company. It’s a competitive space with plenty of startups and large companies racing to build a network that provides utility for its users. Earlier this month, ex-Coinbase employee Dan Romero secured $30 million led by a16z to develop Farcaster, a DeSo protocol that allows users to move their social identity across different apps. TechCrunch covered another seed-stage startup, Primitives, that raised a $4 million round in May for its own Solana-based DeSo network. Big tech is in the game, too — Twitter funds an offshoot of its service called BlueSky, an open-source DeSo project founded in 2019 that hasn’t gone live but is experimenting publicly with its development process.


7 ways to keep remote and hybrid teams connected

Marko Gargenta, CEO and founder of PlusPlus, a maker of internal training software that he founded after creating Twitter’s Twitter University, uses that idea to create company culture. It started at Twitter because he saw that some people had deep knowledge in topics that would benefit others. He started tapping them to give workshops and share that knowledge. Those 30-minute workshops were informal, in person, and wildly popular. “One in five engineers were regularly teaching classes,” he says. Those continued when the world went remote, but they shifted to canned videos. Those did not have the same impact. “People wanted human connection,” he says. “So, we started dialing the pendulum back toward live connection. Now they happen over Zoom but are very synchronous.” That has worked well. “If you look at ancient Greece,” says Gargenta, “Plato started The Academy. It was the place where people chasing ideas or mastery congregated, which created a sense of a culture. This pattern of people chasing mastery creates community. It’s what shaped ancient Greece, and all sorts of innovations came out of that.”



Quote for the day:

"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Landry

Daily Tech Digest - October 23, 2021

How Artificial Intelligence is Changing DevOps?

AI automatic testing tools can generate tests automatically, and with little to no code at all, so developers don’t have to worry about writing test code. AI has evolved enough to automatically generate tests by learning the app flows, screens, and elements, with little to no human involvement. The automation tools are so well built and perform automated audits or checks so frequently that there are almost no instances of errors. They capture feedback at every instant, analyze the input, and identify errors in real time. The intelligence of the tools allows developers and team members to reduce their participation in test-automation creation activities and free up their time to focus on more important and urgent tasks, eventually creating a more productive system for the organization. AI is proficient in handling big data with minimal human involvement. For DevOps, this means that huge data sets can now be managed with minimal effort. Since DevOps involves and impacts three functions of an organization simultaneously, it also has tons of data to be managed and maintained on an everyday basis.


How low-code/no-code solutions and automation can triage employee turnover

We are continuing to see AI getting better but it needs to be applied in the right places. For example, teams can automate more of their processes and manual tasks, improving workflows and reducing busywork for agents. Costs are reduced, and customer demand is more easily met, which has proven to lead to happier, more productive support teams. “Solving customer problems and leaving a customer happy is what gets an agent excited about getting up every day and going into the job,” Wolverton says. “We strongly believe getting all of that repetitive work and processes automated so that they can focus on the rewarding work is what’s going to keep their motivation high and keep them in your organization.” It’s also about letting customers help themselves, she adds — more and more, customers want their answer fast, and waiting on the phone for the next available agent isn’t going to cut it. If you can get people their answer quickly and accurately through a search or through a bot, and then only escalate when the issue becomes more complex and a human is uniquely qualified to handle the issue, you’re going to have far more satisfied customers.


Non-Traditional Cybersecurity Career Paths: Entering the Industry

“I’d never considered cyber or even information technology as a career growing up. My interests always piqued around history and physics. I in fact failed first-year engineering for having written an essay on David Hume when asked to discuss induction in engineering. I have an undergraduate degree with a double major in history & philosophy of science and quantum physics. I continued down this path, working in the university’s quantum computing department on the development of quantum circuitry. My work centered on the development of superconducting diamond[s], looking to test and establish the reality of theoretical models predicting room-temperature superconductivity. I believed in making Marty McFly’s future a reality; I was on the path to making superconducting circuitry with the sci-fi application of a hoverboard — although I still don’t believe it’d be able to hover across water. “One day while taking adult skiing lessons with an instructor (now my fiancé), I realized my skillsets weren’t technically focused but operational. I’d spent my theses developing, constructing and rebuilding processes.


Agile talent: How to revamp your people model to enable value through agility

When you cut through it, making the move to agile means you’re really going to be breaking the company down into self-sufficient, multidisciplinary, multidimensional teams. That’s the very essence of agile. However, it’s not all about structure. There are many barriers that must be removed to allow those teams to really work. Some barriers you don’t quite realize are there, and many other barriers don’t appear as barriers today but do appear as barriers going forward. So if you do move the organization to agile, be prepared to drive through a number of the barriers. Because you only really get the true benefit that lies in agile if you’re prepared to put those to the stake. I have talked to many organizations interested in the transition to agile, and in the early conversations the focus is understandably always on the organization’s structure. Having “seen the movie,” and helped many companies in the making of their movie, if I had $100 to spend on agile, I’d put only $10 to $15 against organizational structure. All of the rest I would invest in agile ceremonies and processes, particularly in the people processes.


What Are Low-Code/No-Code Platforms?

Low-code/no-code platforms and capabilities are now being provided by a wide range of providers, from startups trying to fill various niches in the technology to the large enterprise products and services companies. We have covered the low-code/no-code options available from Microsoft, Google and Amazon previously. While there is plenty of crossover ability to connect to the other companies’ products and services, Amazon is the only one that lacks any ability to tie into data that might be hosted on the other two low-code/no-code platforms. Choosing a low-code/no-code platform will likely be impacted by where an organization has its data located. Just like other services offered by these big three companies, it is much easier to work within the same ecosystem rather than mixing and matching across low-code/no-code tools. Once that decision is made, the work of building out those first low-code tools for an organization should be fairly straightforward. Low-code/no-code development intentionally targets knowledge workers who have familiarity with the processes and workflows within their business unit, department or division but do not necessarily have any coding experience.


PostgreSQL v14 Is Faster, and Friendly to Developers

This release also brings more features to parallel query execution, in which PostgreSQL can devise query plans that leverage multiple CPUs to answer queries faster. Now your database can execute queries in parallel for RETURN QUERY and REFRESH MATERIALIZED VIEW. More prominent updates include pipeline mode for libpq, which is the interface that developers use to connect their application to the database. libpq used to wait for one query to complete execution before sending the next one to the database. Now devs can feed multiple transactions into the pipeline, and libpq will execute them turn by turn, feeding results back into the application. The application no longer has to wait for the first transaction to complete before executing the next one. This was one of the updates about which Shahid commented, “Why did we not think about this earlier? This is such a no-brainer! But that’s how technology progresses.” Another potential no-brainer-in-hindsight is an upgrade to TOAST, which now allows for LZ4 compression. TOAST is a system that allows the storage of much larger data.
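Assuming PostgreSQL 14 or later, opting a column into LZ4 TOAST compression is a one-line schema choice; a minimal sketch (the table and column names here are invented for illustration):

```sql
-- Requires PostgreSQL 14+. LZ4 typically compresses and decompresses
-- faster than the default pglz, trading a somewhat larger on-disk size.
CREATE TABLE documents (
    id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    body text COMPRESSION lz4   -- per-column TOAST compression method
);

-- Or make LZ4 the default for newly created columns:
SET default_toast_compression = 'lz4';
```

Values already stored keep whatever compression they were written with; only newly written values use the new method.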


Encouraging STEM uptake: why plugging the skills gap starts at school

Part of the challenge for businesses has been that leaders and recruiters still use assumptions about the value of certain backgrounds and degrees as the basis for their hiring strategy. This has been a particular issue in the technology industry where a formal ‘technical background’ has long been viewed as a minimum requirement to get on the career ladder. In some forward-thinking companies, however, there is more value now being placed on soft skills, such as creativity, persuasion and collaboration. These companies also recognise that employees can build specialist technical skills via routes such as internships, apprenticeships or on the job training. To play a full role in building the STEM workforce, businesses should also offer wider support to organisations that are working to ensure equal opportunities for girls and women. Code First Girls is one of a growing number of organisations that support young adult and working age women, in their case, “to become kick-ass developers and future leaders.” Businesses that are committed to equality of opportunity in their technical teams can help promote inclusion and tap into the female talent pool by working with these like-minded organisations.


Simplifying the complex: Introducing Privacy Management for Microsoft 365

Staying ahead of data privacy regulations and understanding the technical actions you can take to address compliance can be daunting. To help, Microsoft Compliance Manager today has more than 200 regulatory assessment templates covering global, industrial, and regional Data Protection and Privacy regulations, making it easier for customers to interpret, assess, and improve their compliance with regulatory requirements. We recently added three privacy-specific assessments for the Colorado Privacy Act, the Virginia Consumer Data Protection Act (CDPA), and the Egypt Privacy Law. Additionally, we have mapped privacy-specific controls across these assessment templates to the new Privacy Management solution to help you scale your compliance efforts. You can learn more about Compliance Manager, our list of available assessments, and how to use the assessments in our documentation. You can also try the Compliance Manager 90-day trial, which gives you access to 25 assessments. Privacy is a journey.


Remote and hybrid work: 4 tips to ease onboarding

By their nature, hybrid or remote office environments encourage asynchronous collaboration, as not everyone will be online or in the office at the same time. To make asynchronous workflows more manageable, consider the following tips: Minimize context switching by muting unnecessary communication channels, not feeling the need to respond immediately, and using messaging apps like Slack asynchronously; Set up Slack channels for different languages so people can easily communicate with one another on their own time (this is particularly helpful if you’re working with developers from around the world); Use project management tools, such as Jira, which allow everyone to provide input into projects on their own time. These tools also help reduce Zoom fatigue while giving team members the chance to complete tasks irrespective of their time zones. Working in a remote or hybrid environment can be challenging for many teams. But these recommendations can help you reap significant benefits. You’ll have a chance to attract, retain, and get the most out of other talented developers and IT managers with unique perspectives and different backgrounds – and that will help everyone succeed.


Promoting Creativity in Software Development with the Kaizen Method

The Kaizen method creates continuous improvements by implementing constant positive changes. Over time, these small, gradual improvements can produce significant results. It has long been a key principle of lean manufacturing methods. In English, the word "kaizen" means change for the better (kai = change, zen = good). The philosophy was first introduced at Toyota in Japan after World War II. The car manufacturer formed quality circles — groups of workers who perform similar tasks — in its production process. The teams met regularly to identify and review work-related problems, analyze the situation, and offer improvement suggestions. ... By applying the Kaizen proactive model, SenecaGlobal recently initiated an innovative process to improve the billing rate for a key client by implementing agile methodologies and conducting regular risk assessments for delivery timelines. As part of discovery, the developers uncovered a way to eliminate the need for a third-party software solution to decrypt/encrypt credit card payments, which resulted in significant cost savings. 



Quote for the day:

"One machine can do the work of fifty ordinary men. No machine can do the work of one extraordinary man." -- Elbert Hubbard

Daily Tech Digest - July 03, 2020

Designing data governance that delivers value

Without quality-assuring governance, companies not only miss out on data-driven opportunities; they waste resources. Data processing and cleanup can consume more than half of an analytics team’s time, including that of highly paid data scientists, which limits scalability and frustrates employees. Indeed, the productivity of employees across the organization can suffer: respondents to our 2019 Global Data Transformation Survey reported that an average of 30 percent of their total enterprise time was spent on non-value-added tasks because of poor data quality and availability ... The first step is for the DMO to engage with the C-suite to understand their needs, highlight the current data challenges and limitations, and explain the role of data governance. The next step is to form a data-governance council within senior management (including, in some organizations, leaders from the C-suite itself), which will steer the governance strategy toward business needs and oversee and approve initiatives to drive improvement—for example, the appropriate design and deployment of an enterprise data lake—in concert with the DMO. The DMO and the governance council should then work to define a set of data domains and select the business executives to lead them.


How to Kill Your Developer Productivity

The problems start when teams get carried away with microservices and take the "micro" a little too seriously. From a tooling perspective, you will now have to deal with many more YAML files and Dockerfiles, dependencies between the variables of these services, routing issues, and so on. They need to be maintained, updated, cared for. Your CI/CD setup, as well as your organizational structure and probably your headcount, needs a revamp. If you go into microservices for whatever reason, make sure you plan sufficient time to restructure your tooling setup and workflow. Just count the number of scripts in various places you need to maintain. ... Kubernetes worst case: Colleague XY really wanted to get their hands dirty and found a starter guide online. They set up a cluster on bare metal and it worked great with the test app. They then started migrating the first application and asked their colleagues to start interacting with the cluster using kubectl. Half of the team is now preoccupied with learning this new technology. The poor person now maintaining the cluster will be on it full time the second the first production workload hits the fan.
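To make the sprawl concrete, here is a deliberately small, hypothetical docker-compose fragment; every additional microservice repeats a block like these, each with its own image, environment variables, ports, and dependencies to keep up to date (all names and tags are invented for illustration):

```yaml
services:
  orders:
    image: example/orders:1.4.2
    environment:
      - PAYMENTS_URL=http://payments:8081   # cross-service coupling to maintain
    depends_on:
      - payments
  payments:
    image: example/payments:2.0.7
    ports:
      - "8081:8081"
```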


A Brief History of Data Lakes

Data Lakes are consolidated, centralized storage areas for raw, unstructured, semi-structured, and structured data, taken from multiple sources and lacking a predefined schema. Data Lakes have been created to save data that “may have value.” The value of data and the insights that can be gained from it are unknowns and can vary with the questions being asked and the research being done. It should be noted that without a screening process, Data Lakes can support “data hoarding.” A poorly organized Data Lake is referred to as a Data Swamp. Data Lakes allow Data Scientists to mine and analyze large amounts of Big Data. Big Data, which was used for years without an official name, was labeled by Roger Magoulas in 2005. He was describing a large amount of data that seemed impossible to manage or research using the traditional SQL tools available at the time. Hadoop (2008) provided the search engine needed for locating and processing unstructured data on a massive scale, opening the door for Big Data research. In October of 2010, James Dixon, founder and former CTO of Pentaho, came up with the term “Data Lake.” Dixon argued Data Marts come with several problems, ranging from size restrictions to narrow research parameters.


What is agile enterprise architecture?

An important group of agility dimensions relates to the process of strategic planning, where business leaders and architects collectively develop the global future course of action for business and IT. One of these dimensions is the overall amount of time and effort devoted to strategic planning. Some companies invest considerable resources in the discussions of their future evolution, while other companies pay much less attention to these questions. Another dimension is the organisational scope covered by strategic planning. Some companies embrace all their business units and areas in their long-range planning efforts, while others intentionally limit the scope of these efforts to a small number of core business areas. A related dimension is the horizon of strategic planning. Some organisations plan for no more than 2-3 years ahead, but others need a five-year, or even longer, planning horizon. Yet another relevant dimension is how the desired future is defined. Some companies create rather concrete descriptions of their target states, while others define their future only in terms of planned initiatives in investment roadmaps.


How to Guard Against Governance Risks Due to Shadow IT and Remote Work

Shadow IT evolves in organizations when workers, teams, or entire departments begin to improvise their work processes through unauthorized services or practices that operate outside the oversight and control of IT. It may involve something as seemingly harmless as storing work documents on a personal laptop, or it could pose a catastrophic risk by transferring confidential intellectual property or regulated private data via an unsecured personal file sharing service. ... Although productivity is critical, the use of personal cloud file services, ad hoc team network file shares, and personal email for file transfer undermine governance and represent material risk from a discovery, privacy, and noncompliance perspective. Without equipping your employees with productivity tools that address governance requirements, they pursue novel techniques without understanding the risks. Transferring documents via email, Dropbox, or Google Drive may seem ingenious; in reality, users may not understand the dangers posed by insufficient authentication or auditing or the direct violation of data privacy requirements. What's more, unmanaged deletion of work product may violate legal hold requirements.


How to Convince Stakeholders That Data Governance is Necessary

Oftentimes, the data consumers don’t have an inventory of the data available to them. The consumers don’t have business glossaries, data dictionaries, and data catalogs that house information about the data that would improve their understanding of it (and access to the metadata might be a problem even if it is available). They don’t immediately know whom to reach out to to request access to the data (which they may not know exists in the first place). And the rules associated with the data are not documented in resources that are available to data consumers, thus putting all of this effort, post hoop-jumping, at risk anyway. If you ask data consumers, casual data users, and data scientists what causes delays and problems in completing their normal job, you can expect to get answers like those listed in the previous paragraph, answers that will boggle your mind. At that point, you will begin to understand the often-mentioned 80/20 rule. This rule states that eighty percent of their time is spent data wrangling and the other twenty percent is spent actually doing the analysis, meaningful reporting, and answering questions that are truly part of their job.


Studying an 'Invisible God' Hacker: Could You Stop 'Fxmsp'?

Experts say the group was extremely well-organized and used teams of specialists, built a sophisticated botnet and sold remote access and exfiltrated data in the course of perfecting the botnet to help monetize those efforts. Or at least that was the group's MO until AdvIntel dropped a report in May 2019 documenting Fxmsp's activities. Shining a light on the gang - which relied in large part on advertising via publicly accessible cybercrime forums - caused the group to disappear. "The Fxmsp hacking collective was explicitly reliant on the publicity of their offers in the dark market auctions and underground communities," Yelisey Boguslavskiy, CEO of AdvIntel, tells me. After the report's release, he says Fxmsp disappeared from public view, although it's not clear if the hacker with that handle might still be operating privately. Study Fxmsp's historical operations, and a less-is-more ethos emerges. "In most cases, Fxmsp uses a very simple, yet effective approach: He scans a range of IP addresses for certain open ports to identify open RDP ports, particularly 3389. Then, he carries out brute-force attacks on the victim's server to guess the RDP password," Group-IB says in a recap.
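Defenders can run the same check against their own perimeter before an attacker does. A minimal sketch of such a TCP probe in Python (the host and port below are illustrative, not from the article):

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check whether RDP (3389) is reachable on a host you administer.
print(is_port_open("192.0.2.10", 3389, timeout=0.5))
```

Anything that answers on 3389 from the open internet is exactly the kind of target the article describes, and belongs behind a VPN or gateway.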


4 common software maintenance models and when to use them

Quick-fix: In this model, you simply make a change without considering efficiency, cost or possible future work. The quick-fix model fits emergency maintenance only. Development policies should forbid the use of this model for any other maintenance motives. Consider forming a special team dedicated to emergency software maintenance. ... Iterative: Use this model for scheduled maintenance or small-scale application modernization. The business justification for changes should either already exist or be unnecessary. The iterative model only gets the development team involved. The biggest risk here is that it doesn't include business justifications -- the software team won't know if larger changes are needed in the future. The iterative model treats the application target as a known quantity. ... Reuse: Similar to the iterative model, the reuse model includes the mandate to build, and then reuse, software components. These components can work in multiple places or applications. Some organizations equate this model to componentized iteration, but that's an oversimplification; the goal here is to create reusable components, which are then made available to all projects under all maintenance models. 


Newly discovered principle reveals how adversarial training can perform robust deep learning

Why do we have adversarial examples? Deep learning models consist of large-scale neural networks with millions of parameters. Due to the inherent complexity of these networks, one school of researchers believes in a “cursed” result: deep learning models tend to fit the data in an overly complicated way, so that for every training or testing example there exist small perturbations that change the network output drastically. This is illustrated in Figure 2. In contrast, another school of researchers holds that the high complexity of the network is a “blessing”: robustness against small perturbations can only be achieved when high-complexity, non-convex neural networks are used instead of traditional linear models. This is illustrated in Figure 3. It remains unclear whether the high complexity of neural networks is a “curse” or a “blessing” for the purpose of robust machine learning. Nevertheless, both schools agree that adversarial examples are ubiquitous, even for well-trained, well-generalizing neural networks.
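Whatever the explanation, the mechanics of crafting an adversarial example are easy to sketch. The toy below applies the fast-gradient-sign idea to a plain logistic model in pure Python; the weights and input are invented for illustration, and real attacks target deep networks rather than this linear stand-in:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def fgsm_linear(x, y, w, eps):
    """Fast-gradient-sign perturbation for a logistic model p = sigmoid(w.x + b).

    For this linear model, the sign of the input gradient of the logistic
    loss is simply -y * sign(w), so each feature is nudged by +/- eps.
    """
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * (-y) * sign(wi) for xi, wi in zip(x, w)]

w, b = [2.0, -3.0, 1.0], 0.5          # toy model parameters
x = [0.5, 0.4, 0.2]                   # w.x + b = 0.5 > 0, classified positive
x_adv = fgsm_linear(x, y=1.0, w=w, eps=0.2)

print(sigmoid(dot(w, x) + b) > 0.5)      # True: original prediction is positive
print(sigmoid(dot(w, x_adv) + b) > 0.5)  # False: a small perturbation flips it
```

Each feature moves by at most 0.2, yet the predicted class flips, which is the phenomenon both schools of researchers are trying to explain.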


AI Adoption – Data governance must take precedence

Obstacles are to be expected on the path to digital transformation, particularly with unfamiliar entities in the mix. For AI adoption, the most prevalent obstructions are: a company culture that doesn’t recognise a need for AI, difficulties in identifying business use cases, a skills gap or difficulty hiring and retaining staff and a lack of data or data quality issues. With this broad spectrum of challenges, it is worth delving into a couple of them. Firstly, it is interesting to note that an incompatible company culture mostly affects those companies that are in the evaluation stage with AI. When rephrased, perhaps it is obvious – a company with “mature” AI practices is 50 percent less likely to see no use for AI. By contrast, in a company where AI is not yet an integrated business function, resistance is more likely. Secondly, AI adopters are more likely to encounter data quality issues; by virtue of working closely with data and requiring good data practice, they are more likely to notice when errors and inconsistencies arise. Conversely, companies in the evaluating stages of AI adoption may not be aware of the extent of any data issues.



Quote for the day:

"Most people live with pleasant illusions, but leaders must deal with hard realities." -- Orrin Woodward

Daily Tech Digest - September 19, 2019

Space internet service closer to becoming reality

Interestingly, though, a SpaceX filing made with the U.S. Federal Communications Commission (FCC) at the end of August seeks to modify its original FCC application because of results it discovered in its initial satellite deployment. SpaceX is now asking for permission to “re-space” previously authorized, yet unlaunched, satellites. The company says it can optimize its constellation by spreading the satellites out more. “This adjustment will accelerate coverage to southern states and U.S. territories, potentially expediting coverage to the southern continental United States by the end of the next hurricane season and reaching other U.S. territories by the following hurricane season,” the document says. Satellite internet is used extensively in disaster recovery. Should SpaceX's request be approved, it will speed up service deployment for the continental U.S. because fewer satellites will be needed. Because we are currently in a hurricane season (Atlantic basin hurricane seasons last from June 1 to Nov. 30 each year), one can assume they are talking about services at the end of 2020 and the end of 2021, respectively.



Windows Defender malware scans are failing after a few seconds

The issue has been widely reported over the past two days on the Microsoft tech support forums, Reddit, and tech support sites like AskWoody, DeskModder, BornCity, and Bleeping Computer. The bug impacts Windows Defender version 4.18.1908.7 and later, released earlier this week. It was introduced while Microsoft was trying to fix another bug introduced with the July 2019 Patch Tuesday. Per reports, the original bug broke "sfc /scannow," a command that is part of the Windows System File Checker utility, which lets Windows users scan and fix corrupted files. After the July Patch Tuesday, this utility started flagging some of Windows Defender's internal modules as corrupted, resulting in incorrect error messages that fooled admins into believing there was something wrong with their Windows Defender installation and its updates. Microsoft announced a fix for the System File Checker bug in August, but the actual patch was delayed. When the fix arrived earlier this week, it didn't yield the expected results.


What does upstream and downstream development even mean?


If the flow of data goes toward the original source, that flow is upstream. If the flow of data goes away from the original source, that flow is downstream. ... The idea that either upstream or downstream could be superior depends on the commit. Say, for example, the developer of Application B makes a change to the application that adds a new feature unique to B. If this feature has no bearing on Application A, but does have a use in Application D, the only logical flow is downstream. If, on the other hand, the developer of Application D submits a change that would affect all other applications, then the flow should be upstream to the source (otherwise, the change wouldn't make it to applications B or C). ... An upstream flow of data has one major benefit (besides all forks gaining access to the commit). Let's say you're the developer of Application B and you've made a change to the core of the software. If you send that change downstream, you and the developer of D will benefit. However, when the developer of Application A makes a different change to the core of the software, and that change is sent downstream, it could overwrite the commit in Application B.


Soft Skills: Controlling your career

Projecting positivity is also a soft skill. The reality is that a busy IT department will achieve a lot and there is much to focus on. Of the technical people I know, most are passionate about what they do. Passion drives excellence, but it also has a dark side that we see manifest in various IT "religious wars". It narrows the focus, closes the mind and prevents us from acknowledging any evidence that contradicts our beliefs. Passion is also a big turn-off for senior executives, who tend to prefer calmness. It is difficult to get the balance right between passion and dispassion. The best advice I have been given is that it is OK to hold strong opinions but important to hold them loosely. By all means be passionate and use it to drive you to put forward the best possible case for your chosen subject, but accept that others will have equally passionate views and either, or both, of you may be wrong. If you are not passionate then you won't put forward convincing arguments or test hypotheses with sufficient rigour.


Creating ASP.NET Core Application with Docker Support

A Docker container packages the operating system, source code, environment variables (if any), and dependent components needed to run the software. So, if anyone wants to run your software, they can simply take the container and get started, without putting in effort to set up the machine to make things work. ... Many times, you must have heard developers saying: it is working fine on my machine, but I don’t know what is missing on your machine, or why the same software is not working on your machine. Such discussions usually pop up during the testing phase and, in my personal experience, sometimes it takes hours to identify that one small missed dependency. Here, Docker comes to the rescue. With containerization, each and every dependency is packed into containers and is available for both Linux and Windows. Hence, everyone using the software will have the same environment. The concept of Docker has essentially eliminated the problem of mismatched environments. Isn’t it amazing?
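As a concrete illustration, a minimal multi-stage Dockerfile for an ASP.NET Core application might look like the sketch below; the image tags and the MyApp project name are assumptions for illustration, not from the article:

```dockerfile
# Build stage: compile and publish the app with the .NET SDK image.
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish MyApp.csproj -c Release -o /app/publish

# Runtime stage: only the ASP.NET runtime and the published output ship.
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

Everything the app depends on, down to the runtime version, travels inside the image, which is exactly the "same environment everywhere" property described above.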


Why businesses would rather lose revenue than data


A big reason for cybersecurity issues is the lack of IT talent in SMBs, the report found. Half of businesses said they only provide a one-time security awareness IT training to staff. To solve for the skills gap, a third of companies (33%) said they currently outsource some of their IT activities, and another 40% said they plan to do so.  Regardless, SMBs need a plan. "With regards to addressing security concerns, it's important to have several layers of security so that there's no way an outside 'silver bullet' can penetrate a system," Claudio said. "Making sure staff are aware of potential security threats, like phishing scams, is also crucial as they will usually be your first line of defense. Patch management and vulnerability assessment are also mission critical." ...  "To support business continuity, it's important to have a great backup and disaster recovery program including off-site data copy in the event of an emergency," Claudio noted. "Again, making sure you have access to the right IT resources and skill sets by utilizing a trusted outsourced service provider is essential."


Oracle goes all in on cloud automation

“Digital assistants and conversational UI are going to transform the way we interact with these applications, and just make things a lot easier to deal with,” Miranda says. They will also enable supply chain managers to check on delivery status, track deviations and report incidents, Oracle’s goal being to enable root-cause analysis of supply chain problems via the chat interface. In HR, Oracle HCM Cloud will chat with employees about onboarding and accessing their performance evaluations, while sales staff will be able to configure quotes using voice commands, Oracle says. Oracle and Amazon are famously combative, but Oracle is starting to adopt the same terminology Amazon uses for its Alexa virtual assistant, referring to extended dialogs to accomplish a goal as “conversations” and tasks that its digital assistants can help with as “skills.” R. “Ray” Wang, founder and principal analyst at Constellation Research, says Oracle’s effort to weave AI into all its apps is paying off. ... “It’s the long-term performance improvement of feedback loops. The next best actions are more than rudimentary. Think of the Digital Assistants plus Intelligent Document Recognition, and predictive planning as all tools to help drive more automation and augmented decisions in enterprise apps.”


Strengthen Distributed Teams with Social Conversations

"Cognitive trust is based on the confidence you feel in another person’s accomplishments, skills, and reliability, while affective trust arises from feelings of emotional closeness, empathy, or friendship." In your team, trust might be developed and sustained between individuals in different ways. Some of you will be looking out for how much others fulfill their offer of help, whether they deliver their work on time, and whether their work is of high quality. Meanwhile, others will be looking for a more personal or social connection, looking for things they have in common with others, which is easier to find out during real-time conversations. Getting to know each other well requires having a mental image of the person, hearing their voice, and seeing their facial expressions, and online meetings can help us achieve this. In this article, I suggest two ways to use meetings to strengthen your team relationships: incorporate social conversations into your scheduled meetings, and hold online meetings for the specific purpose of reconnecting as colleagues.


DevSecOps veterans share security strategy, lessons learned


Once DevOps and IT security teams are aligned, the most important groundwork for improved DevOps security is to gather accurate data on IT assets and the IT environment, and give IT teams access to relevant data in context, practitioners said. "What you really want from [DevSecOps] models is to avoid making assumptions and to test those assumptions, because assumptions lead to vulnerability," Vehent said, recalling an incident at Mozilla where an assumption about SSL certificate expiration data brought down Mozilla's add-ons service at launch. ... Once a strategy is in place, it's time to evaluate tools for security automation and visibility. Context is key in security monitoring, said Erkang Zheng, chief information security officer at LifeOmic Security, a healthcare software company, which also markets its internally developed security visibility tools as JupiterOne. "Attackers think in graphs, defenders think in lists, and that's how attackers win," Zheng said during a presentation here. "Stop thinking in lists and tables, and start thinking in entities and relationships."


Cisco spreads ACI to Microsoft Azure, multicloud and SD-WAN environments

Key new pieces of ACI Anywhere include the ability to integrate Microsoft Azure clouds and a cloud-only implementation of ACI. Cisco has been working closely with Microsoft, and while previewing the Azure cloud support earlier this year it also added Azure Kubernetes Service (AKS) to the managed services that natively integrate with the Cisco Container Platform. With the Azure cloud extension, the service uses the Cisco Cloud APIC, which runs natively in the Azure public cloud to provide automated connectivity, policy translation and enhanced visibility of workloads in the public cloud, Cisco said. With the new Azure extensions, customers can tap into cloud workloads through ACI integrations with Azure technologies like Azure Monitor, Azure Resource Health and Azure Resource Manager to fine-tune their network operations for speed, flexibility and cost, Cisco stated. As part of the Azure package, the Cisco Cloud Services Router (CSR) 1000V provides connectivity between on-premises and Azure cloud environments.




Quote for the day:

"The leadership team is the most important asset of the company and can be its worst liability" -- Med Jones


Daily Tech Digest - June 12, 2019

IoT security vs. privacy: Which is a bigger issue?

Predictably, most of the teeth-gnashing has come on the consumer side, but that doesn’t mean enterprise users are immune to the issue. On the one hand, just like consumers, companies are vulnerable to their proprietary information being improperly shared and misused. More immediately, companies may face backlash from their own customers if they are seen as not properly guarding the data they collect via the IoT. Too often, in fact, enterprises shoot themselves in the foot on privacy issues, with practices that range from tone-deaf to exploitative to downright illegal, leading almost two-thirds (63%) of consumers to describe IoT data collection as “creepy,” while more than half (53%) “distrust connected devices to protect their privacy and handle information in a responsible manner.” ... Police in more than 50 cities and towns across the country are apparently offering free or discounted Ring doorbells, and sometimes requiring the recipients to share footage for use in investigations. Many privacy advocates are troubled by this degree of cooperation between police and Ring, but that’s only part of the problem. Last year, for example, Ring workers in Ukraine reportedly watched customer feeds. Amazingly, though, even that only scratches the surface of the privacy flaps surrounding Ring.



Researchers crack digital safe using HSM flaw


The researchers found that the firmware built into the module was signed, but not encrypted. This meant that they could analyze how it worked, and they found that it allowed them to upload and run additional custom code. They used the software development kit (SDK) provided with the HSM to upload a custom firmware module to the unit. This gave them access to a shell inside the HSM that they could use to run a debugger and analyze the inner workings of the unit. From there, they ran a fuzzer, which sends many queries to the HSM’s PKCS #11 API. PKCS #11 is a cryptographic API created by RSA. They hit the API with a large number of parameters, looking for data that might throw the HSM into an unstable state. These tests uncovered several buffer overflow bugs that they could trigger by sending the HSM certain commands. The researchers were able to write a module that they could run as unsigned custom firmware on the HSM that enabled them to dump all its secrets. They could recover keys, read secrets directly from the HSM’s memory, and dump the contents of the module’s flash storage, including its decryption key.
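The fuzzing step itself follows a familiar loop: generate semi-random inputs, call the target, and record anything that crashes. A minimal sketch in Python, where `toy_parser` is a stand-in crash target invented for illustration (it is not the PKCS #11 API):

```python
import random

def toy_parser(data: bytes) -> int:
    """Stand-in target that 'crashes' on one input shape, like a buffer overflow."""
    if len(data) > 8 and data[0] == 0xFF:
        raise MemoryError("simulated buffer overflow")
    return len(data)

def fuzz(target, iterations=10_000, seed=1234):
    """Feed random byte strings to `target`, collecting inputs that crash it."""
    rng = random.Random(seed)          # fixed seed keeps the run reproducible
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(32)))
        try:
            target(data)
        except Exception:
            crashes.append(data)
    return crashes

crashes = fuzz(toy_parser)
print(f"{len(crashes)} crashing inputs found")
```

Real fuzzers add coverage feedback and input mutation, but the principle is the same: the crashing inputs are the starting points for exploit development.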


Combine containers and serverless to optimize app environments


Serverless is a new and misleading label for an old concept: run applications or scripts on demand without provisioning the runtime infrastructure beforehand. SaaS apps, such as Google Docs, might be considered serverless; when users create a document, they don't have to provision the back-end system that runs the application. Serverless takes this concept to application code, which is abstracted from its various infrastructure services, such as storage, databases, machine learning systems and streaming data processing. Google Cloud emphasizes that serverless functions aren't limited to event-driven code execution, but rather include many of its IaaS and PaaS products that instantiate and terminate on demand and don't require prior setup. On cloud serverless platforms, like AWS Lambda and Azure Functions, functions run code in response to an event trigger, such as an event on a message queue or notification service, and are typically used for short-duration jobs that handle tasks such as data acquisition, filtering and transformation, application integration and user input.
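In code, such a function reduces to a single handler the platform invokes per event. A minimal sketch in the AWS Lambda Python handler style; the SQS-like event shape and the doubling "transformation" are illustrative assumptions:

```python
import json

def handler(event, context):
    """Event-driven function: the platform provisions the runtime, calls this
    once per trigger, and tears it down; the author manages no servers."""
    transformed = [
        json.loads(record["body"]).get("value", 0) * 2   # toy transformation step
        for record in event.get("Records", [])
    ]
    return {"statusCode": 200, "body": json.dumps(transformed)}

# Local invocation with a fake event, as the platform would on a queue message.
print(handler({"Records": [{"body": '{"value": 21}'}]}, None))
```

The function holds no state between invocations, which is what lets the platform instantiate and terminate it on demand.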


Ensuring trust in an age of digital banking

First, the bank needs to be sustainable. That includes following a code of conduct: integrating sustainability risk in processes and strengthening policies and enabling transparent reporting, as well as conducting the work that prevents the bank from being used for different types of financial crime. This is our license to operate. Second, we develop financial services with positive climate impact as a response to our customers’ needs. We have a very proud 10-year history of offering green bonds. Last year we launched green mortgages. In January, we launched our first blue bond [for investing in marine conservation projects], and we also offer green car leasing. We are trying to cater to customer demand. We understand that people care about what they do with their money. We have a very ambitious plan to introduce more financial solutions that capture what every single individual cares about. Today there is a good array of different products and services with positive climate impact, but it is still too little to meet the growing demand.


Hybrid Development: The Value at the Intersection of TDD, DDD, and BDD

Domain Driven Development
What is the best way to tackle a large development project? You break it down into smaller, more manageable segments, or, in the case of DDD, domains. When you split the project into smaller domains, segregated teams can handle the functionality of each domain end-to-end. To best understand those domains, you enlist the help of domain experts: people who understand the problem and that realm of knowledge better than anyone else. Typically, the domain expert is not the one responsible for developing the solution; rather, DDD as a whole is used to bridge the knowledge gap that usually exists between these experts and the solution being built. Through models, context, and ubiquitous language, all parties involved should have a clear understanding of what the particular problems are and how the ensuing build will be structured. ... As the complexity of your projects grows, the only way to maintain the viability of your build and ensure success is to have your development practices grow with it.
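A minimal sketch of the ubiquitous-language idea: the model below uses the same words a domain expert would ("order", "settle", "refund"), so business rules can be read and checked by both sides. The domain, names, and rules are invented for this illustration, not taken from the article:

```python
from dataclasses import dataclass, field

# Class and method names deliberately mirror the domain expert's vocabulary
# (the "ubiquitous language"), so conversations about the model and the code
# use the same terms.

@dataclass
class Order:
    total: float
    settled: bool = False
    refunds: list = field(default_factory=list)

    def settle(self) -> None:
        self.settled = True

    def refund(self, amount: float) -> None:
        # Domain rule, as the expert might state it: "you can only refund a
        # settled order, and never more than its remaining balance."
        if not self.settled:
            raise ValueError("cannot refund an unsettled order")
        if amount > self.total - sum(self.refunds):
            raise ValueError("refund exceeds remaining balance")
        self.refunds.append(amount)

order = Order(total=100.0)
order.settle()
order.refund(30.0)
print(sum(order.refunds))  # prints 30.0
```

Because the rules live in the model itself, a misunderstanding between expert and developer surfaces as a visible difference between the stated rule and the code.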


Reaping the benefits of a strong strategy-driven business analytics IQ

Analytics IQ is a measure of an organization’s ability to leverage analytics to support business and IT objectives. Many organizations start their analytics journey eagerly, but without a clear strategy. This approach often leads to failed pilot projects, which have not provided the needed insights to answer business questions. Let us take a step back and first focus on analytics. It is easier to understand analytics when you understand the process that data goes through to become actual, actionable intelligence, rather than unusable numbers and words. I like to think about it in terms of retail. The price of an item is just plain data. However, when we add additional indicators, e.g., the price is attached to a celebrity’s merchandise, and recently, that person was involved in a controversy — then this data becomes information, something of interest to us. The information can then be used to try and predict what will happen to the price of this merchandise in the following days. That is intelligence: When we add context to information, it becomes intelligence.
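The retail example above can be rendered as a toy pipeline. The numbers and the deliberately naive trend rule are invented for illustration; the point is only the progression from bare data, to data with context, to a forward-looking judgment:

```python
# Data: a bare price history, just numbers with no meaning attached.
prices = [120.0, 118.0, 95.0, 80.0]

# Information: the same numbers with context attached.
information = {
    "item": "celebrity merchandise",
    "context": "endorser recently involved in a controversy",
    "prices": prices,
}

# Intelligence: context plus a (deliberately naive) prediction of what
# happens next -- here, assume the most recent movement continues.
def predict_next(info):
    p = info["prices"]
    trend = p[-1] - p[-2]
    return p[-1] + trend

print(predict_next(information))  # 80.0 + (80.0 - 95.0) = 65.0
```

A real analytics pipeline would replace the one-line trend rule with a proper model, but the data/information/intelligence layering is the same.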


Triada backdoors were pre-installed on Android devices


The story of Triada began when Kaspersky Lab researchers discovered it in early 2016, and at that time the main purpose of the Android malware was "to install spam apps on a device that displays ads," according to Google. Last week, Lukasz Siewierski, a reverse engineer on the Android security and privacy team at Google, explained that Triada was much more advanced than previously thought. "The methods Triada used were complex and unusual for these types of apps," Siewierski wrote in a blog post. "Triada apps started as rooting Trojans, but as Google Play Protect strengthened defenses against rooting exploits, Triada apps were forced to adapt, progressing to a system image backdoor." While Google added features to Android to protect against threats like Triada, the threat actors behind the malware took another unusual approach in the summer of 2017 and performed a supply chain attack to get the backdoor malware preinstalled on budget phones.


What Stands Out in Proposed Premera Lawsuit Settlement?

Technology attorney Steven Teppler points to the attention given to "fixing" the health insurer's security problems. The proposed agreement, which was filed on May 31 in a federal court in Oregon, would settle a class action lawsuit that consolidated more than 40 lawsuits filed after the data breach was revealed in March 2015 by the Seattle-based insurer. It awaits court approval. The settlement proposes $32 million for breach victims and related legal costs and would require the health insurer to invest $42 million in bolstering data security. The settlement "not only takes care of victims, but takes care of business internally at the organization to make sure there are resources devoted to fixing or mitigating the security problem, but also that there are ways to establish milestones to make sure what is promised is actually done," Teppler says in an interview with Information Security Media Group. Under the settlement, Premera would spend at least $14 million annually over the next three years on enhanced data security measures.


5 ways to achieve a risk-based security strategy


A risk-based security approach, on the other hand, identifies the true risks to an organization's most valuable assets and prioritizes spending to mitigate those risks to an acceptable level. A security strategy shaped by risk-based decisions enables an organization to develop more practical and realistic security goals and spend its resources more effectively. It also delivers compliance, not as an end in itself, but as a natural consequence of a robust and optimized security posture. Although a risk-based security strategy requires careful planning and ongoing monitoring and assessment, it doesn't have to be an overly complex process. There are five key steps to implementing risk-based security, and though time-consuming, they will align security with the goals of the organization. Board-level support is paramount. Input from numerous stakeholders throughout the organization is essential, as risk mitigation decisions can have a serious effect on operations, which security teams may not fully appreciate if they make these decisions in isolation.
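The prioritization step can be sketched with a simple likelihood-times-impact score, spending mitigation effort on the highest-scoring risks first. The assets, the 1-5 scales, and the scores are invented for illustration; the article does not prescribe a particular scoring model:

```python
# Each risk is rated on two invented 1-5 scales; score = likelihood x impact.
risks = [
    {"asset": "customer database", "likelihood": 4, "impact": 5},
    {"asset": "public website",    "likelihood": 3, "impact": 2},
    {"asset": "build server",      "likelihood": 2, "impact": 4},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

# Highest-scoring risks receive resources first.
prioritized = sorted(risks, key=lambda r: r["score"], reverse=True)
for r in prioritized:
    print(f'{r["asset"]}: {r["score"]}')
```

Even a crude model like this makes the trade-offs explicit and reviewable by the board and other stakeholders, which is the point of the risk-based approach.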


Large firms look to zero-trust security to reduce cyber risk


Essentially, a zero-trust approach is about applying authentication and authorisation to ensure that all traffic within an enterprise is properly authenticated and authorised, whether it is someone coming in from the outside on a VPN connection, an application talking to another application on the network, or a user trying to use an application on the network. "The data from the survey shows many similarities between the various countries in terms of the gaps and threats that large enterprises need to deal with, with respect to secure access," said Scott Gordon, chief marketing officer at Pulse Secure. "Perhaps the most significant difference in secure access priorities was more focus on improving endpoint security and remediation prior to access in the US (57%) compared with 43% in the UK and just 31% in Germany, Austria and Switzerland. This trend also matches higher IoT adoption in the US, although Europe is catching up fast." A key takeaway from this report, said Gordon, is that large organisations across Europe are dealing with an increasingly hybrid IT environment.
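The zero-trust rule described above, that every request is authenticated and authorised on each access with no implicit trust for "inside" traffic, can be sketched as follows. The token store and policy table are invented for illustration:

```python
# Hypothetical credential store and access policy; in practice these would be
# an identity provider and a policy engine, not in-memory dicts.
VALID_TOKENS = {"tok-alice": "alice", "tok-billing-svc": "billing-svc"}
POLICY = {
    ("alice", "reports-app"): True,        # user -> application
    ("billing-svc", "ledger-db"): True,    # application -> application
}

def authorize(token: str, resource: str) -> bool:
    """Authenticate, then authorise, every single request."""
    principal = VALID_TOKENS.get(token)          # who is asking?
    if principal is None:
        return False                             # unauthenticated: denied
    return POLICY.get((principal, resource), False)  # default deny

print(authorize("tok-alice", "reports-app"))  # True
print(authorize("tok-alice", "ledger-db"))    # False: on-network != trusted
```

The key design choice is the default deny: being on the corporate network, or having authenticated for one resource, grants nothing for any other resource.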



Quote for the day:


"Though nobody can go back and make a new beginning... Anyone can start over and make a new ending." -- Chico Xavier