Successful software projects please customers, streamline processes, or otherwise add value to your business. But how do you ensure that your software project will result in the improvements you are expecting? Will users experience better performance? Will productivity across all tasks improve as you hoped? Will users be happy with your changes and return to your product again and again as you envisioned? You won’t find answers to these questions in a standard QA testing plan. Standard QA will ensure that your product works. Usability testing will ensure that your product accomplishes your business objectives. Well-planned usability testing will shed a bright light on everything you truly care about: workflow metrics, user satisfaction, and strength of design. How do you know when to start usability testing? Which usability tests are right for your product or website? Let’s examine the six types of usability testing you can use to improve your software.
Microsoft's president responded specifically to those allegations in his blog post, first touching on Microsoft's work with ICE, a law enforcement agency that is part of the U.S. Department of Homeland Security. "We've since confirmed that the contract in question isn't being used for facial recognition at all. Nor has Microsoft worked with the U.S. government on any projects related to separating children from their families at the border, a practice to which we've strongly objected," Smith said. Instead, the contract involves supporting the agency's "legacy email, calendar, messaging and document management workloads," Smith said. But at what point should an organization put down its foot with a federal agency operating in a manner to which at least some of its employees object? "This type of IT work goes on in every government agency in the United States, and for that matter virtually every government, business and nonprofit institution in the world," Smith said. "Some nonetheless suggested that Microsoft cancel the contract and cease all work with ICE."
IT platforms are constantly under attack from all sorts of malicious efforts, ranging from open-port sweeping to intrusion attacks and denial-of-service assaults, such as the sophisticated distributed denial-of-service attack that took down Dyn in 2016. Historically, IT and security professionals have identified that an attack is happening and then simply applied a predefined fix to deal with the problem. With heuristic automation in the mix, automation becomes responsive to changes in the IT environment caused by the attack. Instead of applying a simple and often ineffective fix, a heuristic IT management system looks at the IT deployment as an overall entity and applies the right fix for the situation. In this example, heuristic automation could change traffic patterns to offload incoming streams to a separate area of the platform and block certain traffic from access to those streams. It also could reallocate running workloads to a public cloud instead of the private cloud, or vice versa, to prevent service disruption. Provide the heuristics engine with information about possible attacks, and it can harden the platform in real time to prevent them from ever happening.
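The idea of choosing a fix based on the overall state of the platform, rather than reacting to a single symptom, can be sketched as a small rules engine. This is only an illustration: the metric names (`requests_per_sec`, `private_cloud_util`) and remediation labels are hypothetical, and a real system would act on live telemetry and orchestration APIs rather than a dictionary.

```python
# Minimal sketch of a heuristic remediation engine. All metric names and
# remediation actions are hypothetical placeholders for illustration.

def choose_remediation(metrics: dict) -> str:
    """Pick a fix from the overall platform state, not one symptom."""
    rps = metrics.get("requests_per_sec", 0)
    baseline = metrics.get("baseline_rps", 1)
    private_util = metrics.get("private_cloud_util", 0.0)

    if rps > 10 * baseline:
        # Traffic far beyond baseline: likely a DDoS, so divert incoming
        # streams to an isolated area of the platform for scrubbing.
        return "offload-to-scrubbing"
    if private_util > 0.9:
        # Private cloud near saturation: burst workloads to public cloud
        # to prevent service disruption.
        return "reallocate-to-public-cloud"
    return "no-action"

print(choose_remediation({"requests_per_sec": 50000, "baseline_rps": 1000}))
# prints "offload-to-scrubbing"
```

In practice the value of the heuristic approach comes from the breadth of signals fed into such rules (and from learned rather than hand-written thresholds), but the dispatch pattern is the same.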
Anaconda, the Python language distribution and work environment for scientific computing, data science, statistical analysis, and machine learning, is now available in version 5.2, with additions to both its enterprise and open-source community editions. ... This enterprise edition of Anaconda, released this week, adds new features around job scheduling, integration with Git, and GPU acceleration. Earlier versions of Anaconda Enterprise were built to allow professionals to leverage multiple machine learning libraries in a business context—TensorFlow, MXNet, Scikit-learn, and more. In version 5.2, Anaconda offers ways to train models on a securely shared central cluster of GPUs, so that models can be trained faster and more cost-effectively. Also new in Anaconda Enterprise is the ability to integrate with external code repositories and continuous integration tools, such as Git, Mercurial, GitHub, and Bitbucket. A new job scheduling system allows tasks to be run at regular intervals—for instance, to retrain a model on new data.
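The interval-based job scheduling described above (for instance, retraining a model on new data every night) follows a familiar pattern. The sketch below is not Anaconda Enterprise's actual scheduler API, which isn't shown in the excerpt; it only illustrates the cron-like "run when a full interval has elapsed" check, with hypothetical job fields.

```python
from datetime import datetime, timedelta

# Hypothetical due-check for an interval-scheduled job (e.g. nightly
# model retraining). Not Anaconda Enterprise's real API.

def is_due(last_run: datetime, interval: timedelta, now: datetime) -> bool:
    """A job is due once a full interval has elapsed since its last run."""
    return now - last_run >= interval

last = datetime(2018, 6, 1, 0, 0)
print(is_due(last, timedelta(days=1), datetime(2018, 6, 2, 0, 0)))   # True
print(is_due(last, timedelta(days=1), datetime(2018, 6, 1, 12, 0)))  # False
```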
With such incredible off-premises computing momentum, the potential impact of a widespread outage at a major data center provider grows daily. Enterprises are acutely aware of how outages could impact their mission-critical data – security was listed as a major concern by 77 percent of cloud users in RightScale’s report. Understandably, data center owners and operators have placed resiliency at the top of their priorities and turn to third-party certifiers to help address the most common root causes of outages, including human error, software issues, network downtime, and hardware failure with a corresponding failure of high-availability architecture. However, there are limited offerings for data center operators to get a holistic audit of all factors that contribute to the resiliency of their services. We’ve been hearing directly from providers that existing offerings have not kept up with the pace of change in the industry. Incumbent programs will sometimes require a facility to be unnecessarily over-engineered. It’s not cost effective, and it takes the focus away from what truly matters to enterprise users: security and reliability.
While the $35 Pi is by no means a computing powerhouse, in recent years enthusiasts have begun harnessing the power of armies of the tiny boards. There's a wide range of Pi clusters out there, from modest five-board arrangements all the way up to sprawling 750-Pi machines. If you're curious to find out more, here are five Pi clusters built in recent years, starting with some you can try yourself and moving on to the Pi-based supercomputers being built by research labs. ... The Los Alamos National Lab (LANL) machine serves as a supercomputer testbed and is built from a cluster of 750 Raspberry Pis, which may later grow to 10,000 Pi boards. According to Gary Grider, head of LANL's HPC division, the Raspberry Pi cluster offers the same testing capabilities as a traditional supercomputing testbed, which could cost as much as $250m. In contrast, 750 Raspberry Pi boards at $35 each would cost just $26,250, though the actual cost of installing the rack-mounted Pi clusters, designed by BitScope, would likely be more. Grider highlights power-efficiency benefits too, and estimates that each board in a several-thousand-node Pi-based system would use just 2W to 3W.
"LabCorp immediately took certain systems offline as part of its comprehensive response to contain the activity," the company said in its SEC filing. "This temporarily affected test processing and customer access to test results on or over the weekend. Work has been ongoing to restore full system functionality as quickly as possible, testing operations have substantially resumed [Monday], and we anticipate that additional systems and functions will be restored through the next several days." Some customers of LabCorp Diagnostics may experience brief delays in receiving results as the company completes that process, LabCorp added. "The suspicious activity has been detected only on LabCorp Diagnostics systems. There is no indication that it affected systems used by Covance Drug Development," a research unit of LabCorp, the company said. "At this time, there is no evidence of unauthorized transfer or misuse of data. LabCorp has notified the relevant authorities of the suspicious activity and will cooperate in any investigation."
An ICS is a key underlying element of the OT world. According to the National Institute of Standards and Technology report NIST SP 800-82 R2, "Guide to Industrial Control Systems (ICS) Security," ICS is a "general term that encompasses several types of control systems, including supervisory control and data acquisition (SCADA) systems, distributed control systems (DCS), and other control system configurations such as skid-mounted Programmable Logic Controllers (PLC) often found in the industrial sectors and critical infrastructures." ICS is used in the industrial, manufacturing and critical infrastructure sectors. For instance, railway controls are a type of SCADA. A street light controller may be a PLC, but it can also be part of a SCADA system. Finally, an ICS includes combinations of control components, including electrical, mechanical, hydraulic or pneumatic, that act together to achieve an industrial objective, such as manufacturing, transportation, or the distribution of material or energy.
A good example of generating test cases is using an evolutionary algorithm to test automated parking in a car. You can imagine that with automatic parking, the number of situations the car can be in is nearly infinite: the starting position may vary, surrounding cars may be positioned in many different ways, and other obstacles that must not be hit may surround the car. The automatic parking function must not hit anything while parking, and the car needs to end up parked correctly. In this case we can generate a series of starting positions that the automatic parking function needs to tackle. Ideally this is virtual, so we can run a lot of tests quickly. The tests could be physical, of course, but that would take more time in test execution. We need to define a fitness function that is evaluated with each test execution run. In this case it would be a degree of passing for the parked car: you can imagine some points for not hitting anything, and points for how well the car is parked in the end. Now we generate a series of tests and run them. Each outcome is evaluated and assigned a total point value.
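The loop described above — generate starting positions, score each run with a fitness function, then evolve the population — can be sketched in a few lines. Everything here is a toy stand-in: the "simulator" scores a scenario by a made-up rule so the loop is runnable, and the scenario fields (`gap_m`, `approach_deg`) are hypothetical; a real setup would call the virtual parking simulation instead. Note that this sketch evolves toward the *hardest* scenarios by keeping the lowest-scoring ones, the usual goal in search-based testing.

```python
import random

random.seed(42)

def simulate_parking(scenario):
    """Stand-in for the virtual test run: returns (collision, alignment).
    We pretend tight gaps plus steep approach angles cause collisions."""
    gap = scenario["gap_m"]
    angle = scenario["approach_deg"]
    collision = gap < 5.0 and angle > 30
    alignment = max(0.0, 1.0 - angle / 90 - max(0.0, 6.0 - gap) / 10)
    return collision, alignment

def fitness(scenario):
    """Degree of passing: points for not hitting anything,
    plus points for how well the car is parked in the end."""
    collision, alignment = simulate_parking(scenario)
    return (0 if collision else 50) + 50 * alignment

def random_scenario():
    # A random starting position: gap between parked cars, approach angle.
    return {"gap_m": random.uniform(4.0, 9.0),
            "approach_deg": random.uniform(0.0, 60.0)}

def mutate(scenario):
    # Perturb a scenario slightly, clamped to the valid ranges.
    child = dict(scenario)
    child["gap_m"] = min(9.0, max(4.0, child["gap_m"] + random.gauss(0, 0.5)))
    child["approach_deg"] = min(60.0, max(0.0, child["approach_deg"] + random.gauss(0, 5)))
    return child

# Evolve: keep the lowest-scoring (hardest) scenarios, mutate them.
population = [random_scenario() for _ in range(20)]
for generation in range(10):
    population.sort(key=fitness)      # lowest fitness = hardest test
    survivors = population[:10]
    population = survivors + [mutate(s) for s in survivors]

hardest = min(population, key=fitness)
```

Each generation discards the scenarios the parking function handles easily and breeds variations of the ones it struggles with, which is how the evolutionary search concentrates testing effort where failures are most likely.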
Quote for the day:
"Strength lies in differences, not in similarities." -- Stephen R. Covey