July 13, 2014

Random Thoughts..
Sometimes you have to chuck it all and start all over again (sigh). So I did, and if you've visited this article before, you will notice that all the methods I tested previously to create random data have vanished.. but trust me, that's a good thing. So, I spent last night racking my brain, trying to figure out how to make this stronger, that is, fortify it against any attempts at reversal, and at the same time, perhaps produce even better randomness from an algorithm. What follows are the best ideas I've had to date, combined in a way that I believe creates a better random generator, better than anything I have come across thus far in my research. For lack of a better name, I'll call it LOKI, because it is a tricky and unpredictable little fellow.


Can Enterprise Architecture Reinvent Itself for the Agile Movement?
Clearly, we need to re-invent Enterprise Architecture to retain its best features and avoid technical debt while adapting to the new realities of agility and the cloud. How on earth do we do that? ... Please note that I’m not advocating any abdication of EA responsibility. We need a clear central vision of what we’re building to avoid technical debt and ensure that systems mesh together well. However, the way we accomplish this must change with the times. We need to be able to clearly and quickly communicate our vision and become active evangelists for it, contributing to real work as we encourage our team members to do the right things.


“Governance Now” for Financial Reference Data Governance
Reporting and analytic functions are often hostage to the integrity and long-term quality of reference data. The consistent use of valid, accurate reference data values is critical to reporting because these values often drive reporting dimensions for sorting, grouping, sub-totaling and other calculations. Long-term consistency is critical for basic trending and period-over-period comparisons. Multi-dimensional and new forms of analytics rely on consistent, accurate values to produce fully populated, sorted and calculated views of enterprise performance. Gaps in dimensional stability related to reference data quality impairments have driven major product categories in the software business to support multi-dimensional analysis (anyone here remember Razza?).


Quantifying the Impact of Agile Software Development Practices
The flexibility designed into the tool permits integration into a variety of workflows – rather than forcing the user to conform their workflow to the tool design. This flexibility turns out to be a significant source of variation for a research effort like ours because it limits the number of assumptions we can make about the workflow generating the data. As a result, the level of variation present in the data was an obstacle – one that cannot be overcome without information on patterns of practice (workflow design) that allow us to make valid inferences about patterns of team behavior that correspond to patterns of performance in the data.


The Value of Information Governance: Finding the ROI
Information governance is the set of multi-disciplinary structures, policies, processes and controls implemented to manage information. Gartner states that "the goal of information governance is to ensure compliance with laws and regulations, mitigate risks and protect the confidentiality of sensitive company and customer data." More than another word for "records management," information governance aims to support an organization's regulatory, legal, risk, environmental and operational requirements.


Out-of-band Initial Replication (OOB IR) and Deduplication
Ah… so if you were expecting that the VHD data would arrive into the volume in deduplicated form, this is going to be a bit of a surprise. At the first go, the VHD data will be present in the volume in its original size. Deduplication happens post-facto, as a job that crunches the data and reduces the size of the VHD after it has been fully copied as a part of the OOB IR process. This is because deduplication needs an exclusive handle on the file in order to go about doing its work. The good part is that you can trigger the job on-demand and start the deduplication as soon as the first VHD is copied. You can do that by using the PowerShell cmdlet provided:
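The excerpt stops before naming the cmdlet; it is presumably Start-DedupJob, the Windows Server Data Deduplication cmdlet for running an optimization pass on demand. A minimal sketch, with the volume letter as a placeholder for wherever the replica VHDs land:

    # Kick off an on-demand optimization (dedup) job on the replica volume
    # as soon as the first VHD has finished copying
    Start-DedupJob -Volume "E:" -Type Optimization

    # Optionally, watch the progress of the running job
    Get-DedupJob -Volume "E:"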


The Difference Between Tools and Solutions for Data Governance
The point is, any tool can take on a job and do as it’s told to do – but who sets the policy and strategic planning to instruct those tools? Data governance is a journey, which not only requires tools, but also an intuitive master plan that incorporates process, objectives and measurable results. It follows a path that should be guided by best practices. Solutions developed and lessons learned should be reusable and applied to reduce rework and maintain consistency in future projects. Because data governance is data-intensive, attention should be paid to assessing which data is or is not valuable to the business. This can be defined as “field value” versus “context value.”


Big Data Governance Software: Sensitive Data
The pivotal advantage to employing Dataguise’s Big Data Governance software (which is effective on traditional data as well) is that it expedites and automates the process of implementing governance rules for sensitive data that is potentially discoverable. The traditional manual process involves creating governance policies and employing IT to modify and search through information systems to find and appropriately tag sensitive data. When such information involves Big Data (with their rapid velocity and myriad forms) found in time-sensitive financial and health care industries, such a process swiftly becomes outdated.


Cindy Walker on Data Management Best Practices and Data Analytics Center of Excellence
There are several trends taking place in the field of semantics. One trend is the industry-focused collaboration efforts to develop and link domain-specific standard ontologies, such as the Financial Industry Business Ontology (FIBO), to facilitate information sharing and regulatory reporting. Use of standard ontologies can help organizations connect disparate data sets and can enable semantic queries across the web and data sharing with internal and external stakeholders easily (without restructuring the data or developing point-to-point interfaces). FIBO is being developed in phases by volunteer contributors in the financial services industry (with some financial regulator participation) under the authority of the Object Management Group.


Implement Observer Pattern in .NET (3 techniques)
The Observer Pattern, a.k.a. the Publisher-Subscriber Pattern. You'll find various articles on how to implement the observer pattern in the .NET Framework using the assemblies provided in the framework. BUT, the .NET Framework has evolved over the years, and along the way it has added new libraries for creating the Observer Pattern. To give you all a brief idea of the Observer Pattern: it defines a one-to-many dependency between objects (a Publisher and multiple Subscribers) so that when one object (the Publisher) changes state, all the dependents (Subscribers) are notified and updated automatically.
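As an illustration of one of the techniques such articles usually cover, here is a minimal sketch using the framework's built-in IObservable<T>/IObserver<T> interfaces; the PriceFeed/PriceLogger names are made up for the example:

    using System;
    using System.Collections.Generic;

    // Publisher: holds the state and notifies every subscriber when it changes.
    class PriceFeed : IObservable<decimal>
    {
        private readonly List<IObserver<decimal>> observers = new List<IObserver<decimal>>();

        public IDisposable Subscribe(IObserver<decimal> observer)
        {
            observers.Add(observer);
            return new Unsubscriber(observers, observer);
        }

        public void Publish(decimal price)
        {
            // State change: push the new value to all current subscribers.
            foreach (IObserver<decimal> observer in observers)
                observer.OnNext(price);
        }

        private class Unsubscriber : IDisposable
        {
            private readonly List<IObserver<decimal>> observers;
            private readonly IObserver<decimal> observer;

            public Unsubscriber(List<IObserver<decimal>> observers, IObserver<decimal> observer)
            {
                this.observers = observers;
                this.observer = observer;
            }

            public void Dispose()
            {
                observers.Remove(observer);
            }
        }
    }

    // Subscriber: reacts automatically whenever the publisher pushes an update.
    class PriceLogger : IObserver<decimal>
    {
        public void OnNext(decimal value) { Console.WriteLine("Price update: " + value); }
        public void OnError(Exception error) { Console.WriteLine("Feed error: " + error.Message); }
        public void OnCompleted() { Console.WriteLine("Feed closed."); }
    }

    class Program
    {
        static void Main()
        {
            PriceFeed feed = new PriceFeed();
            using (feed.Subscribe(new PriceLogger()))   // Dispose() unsubscribes
            {
                feed.Publish(101.5m);
                feed.Publish(102.0m);
            }
        }
    }

The other techniques the article's title alludes to (plain .NET events/delegates, or a hand-rolled subscriber interface) follow the same one-to-many shape; only the plumbing for subscribing and notifying differs.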



Quote for the day:

"The most rewarding things you do in life are often the ones that look like they cannot be done. " -- Arnold Palmer
