“The ‘best’ data depends on its source and purpose,” Jonas writes. “While a company may have employee data in different systems, like IT, HR, Finance etc., the employee name and address maintained by the payroll system is probably the best one to use for tax filing.” That doesn’t mean Jonas thinks organizations should not try to reconcile data plurality. But instead of the traditional “merge-purge” technique that involves massive batch jobs that compare new data against the old data, Jonas thinks we are better off using an “entity resolution system.” “Entity resolution systems generally retain every record and attribute, each with its associated attribution,” he writes. “Because entity resolution systems have no data survivorship processing, there is no chance future relevant data will be prematurely discarded.”
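A minimal sketch of the idea, not Jonas's actual system: every source record is kept with its attribution, and resolution merely links records to an entity ID rather than merging and purging them. The class, field names, and naive match key below are all illustrative assumptions (real entity resolution uses fuzzy and probabilistic matching).

```python
from collections import defaultdict

class EntityResolver:
    """Toy entity-resolution store: retains every record with attribution."""

    def __init__(self):
        self.records = []                  # every record, never discarded
        self.entities = defaultdict(list)  # entity_id -> record indexes
        self._key_to_entity = {}
        self._next_id = 0

    def add(self, record, source):
        """Store a record with its source attribution, link it to an entity."""
        idx = len(self.records)
        self.records.append({"data": record, "source": source})
        # Simplistic match key; a real system would score candidate matches.
        key = record["name"].lower().strip()
        if key not in self._key_to_entity:
            self._key_to_entity[key] = self._next_id
            self._next_id += 1
        eid = self._key_to_entity[key]
        self.entities[eid].append(idx)
        return eid

resolver = EntityResolver()
e1 = resolver.add({"name": "Ada Lovelace", "addr": "12 St James Sq"}, source="HR")
e2 = resolver.add({"name": "ada lovelace", "addr": "PO Box 7"}, source="Payroll")
assert e1 == e2                    # resolved to the same entity
assert len(resolver.records) == 2  # both records survive, nothing purged
```

Because no survivorship rule picks a "winning" record, the payroll address remains available later even if HR's copy looked authoritative at load time.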
While much has been written about the strengths and weaknesses of agile methods, little data has been published to show how widely they are used. This paper corrects that by providing data from our databases for public consumption. ... Some of these organizations are offshoots of the 120 firms and government organizations from which we have received data. Figure 2 summarizes which agile methodologies are in use by these organizations. As many said that they were using a hybrid approach, i.e., one that combined agile with traditional concepts, we have included their responses and categorized them as either hybrid or hybrid/lean.
"The ability to slice and dice has been around for a while. It's more exploratory now. You have a lot of data sources, so finding a needle in a haystack boils down to being interactive," said George Ramonov, founder and CTO at meeting planner provider Qurious.io, in an interview. "Now that we have cross-functional teams, it's important to be able to share visualizations embedded in a website or app to allow sharing without all the extra time of putting together an email." Regardless of how large or small an audience is, good data visualizations speed understanding. Bad visualizations cloud the issues. Here are six ways to best leverage the graphical presentation of data.
Unfortunately, blocking four out of five attacks still leaves open the possibility that a substantial number of attacks might succeed. And today, it’s more a matter of when rather than if you will, eventually, be successfully attacked. What happens then? Even well-prepared companies may not know immediately that they have been breached. But those that have prepared for such an event will be much better off than those that have not. Just as conducting fire drills can save lives in the event of a real fire, preparing for the aftermath of a cyber attack can make an enormous difference in how quickly your company gets back on its feet and how well officers and board members do in the limelight after a major breach becomes public.
If Company A builds protocol Cat, and Company B builds protocol Dog, how do they get those protocols to talk? They can't! It's just as if someone who speaks Japanese and someone who speaks English were trying to communicate: they would need a translator to do so effectively. By embracing open standards, we can make pieces of network equipment talk to each other, servers, laptops, phones, etc. If we didn't promote open standards, we would be locked into solutions where everything is controlled by a single vendor from A to B. We have many customers at Cumulus Networks that run multi-vendor environments, and open standards are not just encouraged, they're crucial.
It describes each step in-depth and includes techniques, example worksheets, and materials that can be used during the overall analysis and implementation process. And it provides insights that are derived from the real-world experience of the authors. This paper is intended to serve as a guide for readers during a process-improvement project and is not necessarily intended to be read end-to-end in one sitting. It is written primarily for clinical practitioners to use as a step-by-step guide to lean out clinical workflows without having to rely on complex statistical hypothesis-testing tools. This guide can also be used by clinical or nonclinical practitioners in non-patient-centered workflows. The steps are based on a universal Lean language that uses industry-standard terms and techniques and, therefore, can be applied to almost any process.
Machine data, meanwhile, is a record of actions that have already occurred, such as call logs or EHR access time stamps. This data is more or less static, and while it may recount the activities of users, it is automatically created by IT systems without much human intervention. When subjected to more sophisticated analysis, the millions of data points in machine data can help a healthcare organization identify a possible breach or chart how long a clinician takes to see her patients, and even aid understanding of how patients flow into the emergency room or how often a nurse updates vital signs. Correctly and efficiently analyzing these types of big data is key for clinical and business intelligence activities, and can help healthcare organizations understand how their IT infrastructure can enable workflow improvements.
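As a small illustration of this kind of machine-data analysis, here is a sketch that mines EHR access time stamps to estimate how long each clinician spends between consecutive record accesses. The log format, user names, and timestamps are all invented for the example.

```python
from datetime import datetime

# Hypothetical machine-data log: (user, ISO-8601 access timestamp).
access_log = [
    ("dr_lee",    "2024-03-01T09:00:00"),
    ("dr_lee",    "2024-03-01T09:18:00"),
    ("dr_lee",    "2024-03-01T09:45:00"),
    ("nurse_kim", "2024-03-01T09:05:00"),
    ("nurse_kim", "2024-03-01T09:10:00"),
]

def avg_minutes_between_accesses(log):
    """Average gap, in minutes, between consecutive accesses per user."""
    by_user = {}
    for user, ts in log:
        by_user.setdefault(user, []).append(datetime.fromisoformat(ts))
    gaps = {}
    for user, times in by_user.items():
        times.sort()
        deltas = [(b - a).total_seconds() / 60 for a, b in zip(times, times[1:])]
        gaps[user] = sum(deltas) / len(deltas) if deltas else 0.0
    return gaps

print(avg_minutes_between_accesses(access_log))
# dr_lee averages 22.5 minutes between accesses, nurse_kim 5.0
```

The same grouping-and-gap pattern scales to breach detection (flagging unusually rapid or off-hours access bursts) once run over millions of log entries.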
The cloud isn’t perfect. There are still outages, challenges around replicating pieces of an environment, and even confusion around all the different kinds of services that cloud can provide today. Fortunately, the entire cloud model is becoming a bit easier to understand and deploy. Why? There are simply more use cases for such a powerful architecture. Businesses of all sizes are quickly realizing that their direct competitive advantage may very well revolve around the capabilities of the cloud. However, with that in mind, what should organizations of various sizes do about physical resource requirements? What about infrastructure expansion? Most of all, what are the limitations of your cloud?
COSO’s failure is due primarily to its narrow focus on internal controls as a risk management tool. Internal controls should have been considered one prong of a four-pronged approach to a comprehensive risk management framework. Fundamentally, internal controls should be considered one of the foundational components of enterprise risk management. What is missing in COSO, and broadly across risk management, are the other tools needed to execute ERM. Risk management must include mechanisms to measure and quantify real risks. The rise of quantitative analysts is the recognition that risk management is measurable and not simply assessed through the qualitative assessments advocated in COSO.
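To make the "measure and quantify" point concrete, here is an illustrative sketch (not anything from COSO) of the simplest quantitative risk technique: a Monte Carlo estimate of expected annual loss for an event with a given probability and impact range. The probabilities and dollar figures are assumptions chosen for the example.

```python
import random

def simulate_annual_loss(p_event, loss_low, loss_high, trials=100_000, seed=42):
    """Estimate expected annual loss for an event occurring with probability
    p_event, with impact drawn uniformly from [loss_low, loss_high]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        if rng.random() < p_event:          # does the event occur this year?
            total += rng.uniform(loss_low, loss_high)
    return total / trials

# Assumed scenario: 10% annual chance of an incident costing $50k-$250k.
expected = simulate_annual_loss(0.10, 50_000, 250_000)
# Analytically, expected loss is 0.10 * 150,000 = $15,000; the simulation
# should land close to that.
```

A number like this can be compared across risks and against mitigation costs, which a qualitative "high/medium/low" rating cannot.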
It has become increasingly clear to me over the past few years working with many different organisations that the idea of a single, identifiable 'DevOps culture' is misplaced. What we've seen in organisations with effective or emerging DevOps practices is a variety of cultures to support DevOps. Of course, some aspects of these different cultures are essential: blameless incident post-mortems; team autonomy; respect for other people; and the desire and opportunity to improve continuously are all key components of a healthy DevOps culture. However, in some organisations certain teams collaborate much more than other teams, and the type and purpose of communication can be different to that in other organisations.
Quote for the day: "Keep your fears to yourself, but share your courage with others." -- Robert Louis Stevenson