July 17, 2014

Total internet failure: are you prepared?
“Because there has not been a significant failure of the internet to date, organisations never consider that as a possibility,” said Bonner. Yet organisations have at least one backup electricity supply even though the energy industry is heavily regulated and well managed, and reliable power supplies are usually supported by a contract. “But when it comes to the internet, which has no clear oversight or governance, organisations have no backup plan and nobody seems to be worrying about a major internet outage,” Bonner pointed out.


Without the cloud, Microsoft may lose grasp on the enterprise
"Microsoft recognizes that this is going to have a huge effect on its partners," he told Computerworld. "They need to show that their partners can make money and be successful with this. Microsoft's success is their partners' success and vice versa." "Cloud is essentially the new platform for Microsoft," said Mahowald. "I think it's more important than mobile, big data or social. What they had in Windows, they have to replicate [in the cloud] or they lose the franchise. If their old platform doesn't matter any more, then Microsoft has lost the software lock-in that is their crown jewel."


A New Dawn for System Design
Agile or rapid methods might be great for fast, iterative software development, but invite a designer into the exercise and you’ll learn how the design process allows for much more effective exploration and discovery. This isn’t the “design thinking” fad. This is “design doing” — technologists, business analysts, designers, researchers, executives and rank-and-file staffers defining possibilities together. They’re focused equally on the people who need to perform, the technology we can deliver, and the business that must be served.


Should online accounts die when you die?
"This is something most people don't think of until they are faced with it. They have no idea what is about to be lost," said Karen Williams of Beaverton, Oregon, who sued Facebook for access to her 22-year-old son Loren's account after he died in a 2005 motorcycle accident. Facebook and other tech companies have been reluctant to hand over their customers' private data, and many people say they wouldn't want their families to have unfettered access to their life online. But when confronted with death, families say they need access to settle financial details or simply for sentimental reasons. What's more, certain online accounts can be worth real money, such as a popular cooking blog or a gaming avatar that has acquired certain status online.


The One Thing CIOs Want, More than Anything
Rather than letting inefficient legacy server platforms consume ever more IT resources in operation and management, virtualization provides automation that typically delivers faster application provisioning and better infrastructure optimization. However, while virtualization has offered a compelling way to consolidate expensive hardware and enhance utilization, the benefits can diminish over time. The first step is to virtualize all the workloads that aren't already virtualized -- Linux workloads, for example. Once virtualization is ubiquitous in your datacenter, how can you achieve even greater financial savings and improve performance?


Java Update: Patch It or Pitch It
The trouble with Java is that it has a very broad install base, but many users don’t even know if they have it on their systems. There are a few ways to find out whether you have Java installed and what version may be running. Windows users can click Start, then Run, then type “cmd” without the quotes. At the command prompt, type “java -version” (again, no quotes). Users also can visit Java.com and click the “Do I have Java?” link on the homepage. Updates should also be available via the Java Control Panel or from Java.com. If you really need and use Java for specific Web sites or applications, take a few minutes to update this software.
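The command-line check described above is easy to script. A minimal Python sketch (the `get_java_version` helper is my own illustration, not from the article) that shells out to `java -version` and copes with machines where Java is absent:

```python
import subprocess

def get_java_version():
    """Return the installed Java version banner, or None if Java is absent.

    Equivalent to typing "java -version" at a command prompt; note that
    Java prints its version banner to stderr, not stdout.
    """
    try:
        result = subprocess.run(
            ["java", "-version"],
            capture_output=True, text=True, timeout=10,
        )
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return None  # not installed, or not on the PATH
    return result.stderr.splitlines()[0] if result.stderr else None

print(get_java_version())
```

Seeing an outdated version string here is the cue the article gives: patch it, or if nothing you use actually needs Java, pitch it.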


Microsoft security critic Aorato in Redmond giant's buyout sights
The exploit is less of a problem for pure-Kerberos environments (but even there, said Be'ery, there's a potential problem, because the user's credentials stay alive for as many as ten hours, giving hackers plenty of time to get the hashes), but turning off NTLM authentication is impractical, as it would lock users out of many legacy services. "We've consulted with a lot of clients who raise the same idea as a solution, but after examining their deployment, we always come to the conclusion that it's impractical, if not impossible, without a major investment in upgrading everything."


Intel thinks your next PC should be thin, light and wire free
The company is developing chips and wireless technologies to meet those goals, with the first fruits of that development available starting this coming year-end holiday season, said CEO Brian Krzanich on an earnings call this week in which he discussed the company's vision of future PCs. About 600 million PCs worldwide are more than four years old and due for upgrades, so the development efforts come at a fortuitous time. Tablets thinner and lighter than the iPad that could be used as full PC replacements will be on store shelves by the end of this year, Krzanich said.


SCRUM explained
As mentioned before, SCRUM throws problems in your face and gives you no solution for them. This is interesting because paper-oriented processes hide these problems, especially when they are people-related. Whenever SCRUM throws these matters in your face, you have two options: either recognize and face them, or blame the methodology you are following and ignore what is really in front of you. As mentioned, these matters are usually people issues. Maybe members of the Team do not get along, maybe you have a client who is too hard to deal with, maybe the ScrumMaster or the Product Owner is not adequate.


Why big data is crucial for innovation and competition
"There's a reason everything is compared to sliced bread," said Carl Frappaolo, innovation expert and director of Knowledge Management at FSG, a nonprofit consulting firm specializing in strategy, evaluation and research. "It's the most successful innovation yet. The simple act of slicing bread for the convenience of customers led to huge and profitable changes in the baking industry." The thing is that we've collectively done much of the easier work to identify innovations like slicing bread. It's harder to see what might be improved upon now. That's where big data comes in--it enables you to see what and where to innovate.



Quote for the day:

"Our business in life is not to get ahead of others, but to get ahead of ourselves." -- E. Joseph Cossman

July 16, 2014

Cloud Governance: Something Old, Something New, Something Borrowed…
Making matters worse, SOA governance tools are often missing in the Cloud Computing environment. There’s no central point for a Cloud consumer / developer to view the Services and associated policies. Furthermore, design-time policies are easily enforceable when you have control over the development and QA process, but those are notoriously lacking in the Cloud environment. The result is that design-time policies are not consistently enforced on the client side, if at all. Clearly, SOA governance vendors and best practices need to step up to the plate here and apply what we already know about SOA registries/repositories and governance processes to give the control that’s needed to avoid chaos and failure.


Aligning Agile with Zachman EA framework
On the flip side - Agile being an umbrella for multiple known methodologies (Scrum, Kanban, Scrumban, etc.) - I believe it can be moulded to meet an organization's requirements, just as we saw that the Zachman framework can be custom-built to suit organizational expectations and culture. Here we have covered just the user-story aspect of the agile process (aligned with the Zachman model); in the next part of this blog series I am going to showcase how one could make the overall SDLC process iterative and align TOGAF with an Agile SDLC, at the epic and sprint level.


CIO Meets Mobile Challenges Head-on
"Android is a more challenging operating system to secure for the enterprise than iOS because of its fragmentation," says Ojas Rege, MobileIron's vice president of strategy, adding, "Deploying Android successfully requires us to make as much of the complexity and variability as possible invisible to our customer. We do expect that Google's increasing focus on enterprise Android combined with our engineering investments will continue to expand the business capabilities of Android and continue to make it easier to deploy."


Configuration management, IT asset management need to be integrated
The asset management process is actually a longer process than configuration management. If you think of the sub-processes, actually it [asset management] goes from plan through procure, receiving the asset, deploying the asset, then operating and optimizing the asset before eventually you move to decommission and dispose. As I said, the main parts of the asset management lifecycle are deploy and operate, then optimize, which is where the configuration process comes in. And with configuration, you have planning and management of the configuration item. You need to be able to identify the configuration item, control it, report on the status of the configuration item, and then you will do some audit and verification. So the processes have so many interfaces they have to operate in harmony.
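The lifecycle interfaces described here can be sketched as a small model. The stage names come from the interview; the `needs_cmdb_record` helper and the exact overlap set are my own illustrative assumptions:

```python
from enum import Enum, auto

class AssetStage(Enum):
    """Asset management lifecycle stages, from plan through dispose."""
    PLAN = auto()
    PROCURE = auto()
    RECEIVE = auto()
    DEPLOY = auto()
    OPERATE = auto()
    OPTIMIZE = auto()
    DECOMMISSION = auto()
    DISPOSE = auto()

# Configuration management overlaps only the middle of that lifecycle:
# deploy, operate and optimize are where configuration items (CIs) live.
CONFIG_MANAGED = {AssetStage.DEPLOY, AssetStage.OPERATE, AssetStage.OPTIMIZE}

def needs_cmdb_record(stage: AssetStage) -> bool:
    """True while the asset should be identified, controlled, status-reported
    and audited as a configuration item."""
    return stage in CONFIG_MANAGED

print([s.name for s in AssetStage if needs_cmdb_record(s)])
```

The point of the sketch is the interface: the asset process owns the whole enum, while the configuration process attaches to only part of it, so the two must hand off cleanly at deploy and decommission.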


Do you want power or influence?
The problem with positional power is that there are actually few really significant roles at the top of any organizational structure. It’s clogged up there. But that does not mean we are out of luck in terms of power if we have not had the good fortune to step into these positions of power and the influence that goes with them. There are also sources of power that are personal and can provide both power and influence if we cultivate and use them well:


Hacker mindset a prereq for security engineers, says Markley CTO
A key theme at this year's MIT Sloan CIO Symposium on the digital enterprise was that the customer comes first for IT, no matter what kind of business a CIO is in. It follows that customer data is among an organization's most valued assets. Protecting customer data in today's digital enterprise, however, can no longer be relegated to your run-of-the-mill security engineers, according to Patrick Gilmore, CTO at data center services provider Markley Group. For Gilmore, candidate prerequisites include a high degree of paranoia and a hacker's mentality.


Wearables: Are we handing more tools to Big Brother?
"This is a massive violation of our right to keep sensitive information private," she said, adding that "any kind of mental health diagnosis can ruin your life." Pam Dixon, founder and executive director of the World Privacy Forum (WPF), agrees. She is one of numerous privacy advocates who point out that most fitness trackers are currently exempt from any regulation -- they are not covered by HIPAA since they are consumer devices that have not been furnished or prescribed by a health-care provider.


Why Test in the Cloud?
First and foremost you want your cloud-based test management to enhance workflow and streamline processes for greater efficiency. One of the first things worth considering is integration. Can you integrate your existing bug-tracking software? Are there any plug-ins or browser-based tools that can help generate logs and record screenshots to create clear and concise bug reports? Can you easily import and export documents, deliverables, log files, images and other files? Can you set permissions levels, make bug status changes, and see real-time updates? Does it support automated test scripts? It's also important to think about versioning and tracking. Every action should be traceable and the ability to revert when something needs to be rolled back can prove to be a real time-saver.


Boost your security training with gamification -- really!
Building awareness of physical security was also part of the effort at Salesforce, which has 13,000 employees. A campaign to test "tailgating" (when an unauthorized person sneaks through a secured door by following immediately behind an authorized person) drew 300 volunteers who were rewarded if they successfully slipped through a door and took something. Generally, before security training, 30% to 60% of users will fall victim to a fake phishing email, says Lance Spitzner, training director at the SANS Institute, a security training vendor. After training and six months to a year of a gamification program, the rate can fall to 5%, he says.


Google’s Container Tool Attracts Support From Microsoft, IBM, and Others
Hölzle pointed out that Microsoft will work to make Kubernetes successful in its Azure cloud; Red Hat plans to add support to its hybrid cloud product; IBM will contribute to Kubernetes and Docker while trying to establish a governance model; Docker pledges to align Kubernetes with its own similar service called libswarm; CoreOS will ensure that Kubernetes works with its Docker-centric operating system; Mesosphere says that they’ll integrate Kubernetes with their own management tool called Mesos; and SaltStack will make Kubernetes part of their configuration management toolset.



Quote for the day:

"Don't worry about people stealing your ideas. If they're any good you'll have to ram them down people's throats" -- H. Aiken

July 15, 2014

GraphLab thinks its new software can democratize machine learning
Given the current state of affairs, though, I asked Guestrin whether it’s really possible for a software product to democratize machine learning the way he hopes Create can do. “That’s a yet-to-be-answered question, because nobody has yet done it,” he said. So far, he acknowledged, most machine learning research has focused on one-off systems and “my curve is better than your curve” demonstrations. He thinks GraphLab Create can reach 80 to 90 percent of use cases because the focus from the beginning was on usability and robustness. There are other commercial machine learning products on the market, including Skytree, but Guestrin said the big difference between them and GraphLab is in the barrier to actually using the product.


New Strategies and Features to Help Organizations Better Protect Against Pass-the-Hash Attacks
Given that organizations must continue to operate after a breach, it is critical for them to have a plan to minimize the impact of successful attacks on their ongoing operations. Adopting an approach that assumes a breach will occur ensures that organizations have a holistic plan in place before an attack occurs. A planned approach enables defenders to close the seams that attackers are aiming to exploit. The guidance also underscores another important point - that technical features alone may not prevent lateral movement and privilege escalation.


Executive Beware: The SEC Now Wants To Police Unethical Corporate Conduct
Clearly corporate bribery, insider trading, and intentional manipulation of financial results are both unlawful and unethical, but what about lesser misconduct? If a permissible, but highly aggressive, accounting treatment is employed to enhance financial reporting results, is the result legal but unethical because of the underlying motivation? Moreover, even if the acts in question technically comply with the law, does the unethical behavior violate the spirit of the law and expose the individual or entity to the unwanted consequences of a government investigation or shareholder suit?


Analytically speaking, Dell delves into the Internet of Things
One of Dell's main thrusts in this area is to round out its analytics platform and offering with StatSoft's analytics software. The short-term plan is to build a data factory. With a data factory, Dell's products can bring in all of your data sources, including IoT, and develop actions around the analytics that you gather. Because, as we all know, data is just noise unless you can do something useful with it. Dell plans to help you do something relevant with your data by using its other products in conjunction with StatSoft's STATISTICA software. John confirmed that there's a lot of talk within Dell surrounding IoT, predictive analytics, and product integration. You can look forward to some announcements related to cloud services and predictive analytics later in 2014.


Oracle hopes to make SQL a lingua franca for big data
Oracle over time will add support for using Big Data SQL with other hardware systems it sells, according to Mendelson. The software is set for general availability within the next couple of months, with pricing to be announced at that time. Big Data SQL isn’t an attempt to replace the SQL engines already created for Hadoop, such as Hive and Impala, which Oracle will continue to ship with the Big Data Appliance, he said. “We’re really solving a wider problem.” One big challenge facing data scientists is simply the overhead of moving data among systems, he said.


Data – the Next Big Thing for Utilities
On average, meter readers, for example, once collected one reading per customer per month. Today, utilities have access to an almost overwhelming amount of data from both meters and other smart endpoints on their infrastructure, as well as external sources such as news and weather aggregators. To realize the maximum value of all the data their communication system delivers, utilities need data analytics. The first thing utilities should understand when adopting data analytics is that the majority of these applications are communication-vendor agnostic. However, a fixed-base communication network with dedicated spectrum and the ability to prioritize incoming data is more efficient and reliable.


Orchestrate cloud service makes using many databases easy
If we fast forward to today, there are many more types of database engines that support many different types of data. Building an application that accesses and updates data using many different data sources can be quite a challenge for a developer. Furthermore, as each of the sources evolves over time, that application must be updated or things don't work any more. Orchestrate hopes that by inserting its middleware into the mix of technology that developers are using, it can let them use a single API to access different data sources rather than being forced to develop their own ETL code.


Cloud Protection: How to Avoid Emergency-Related Outages
In an age of advanced technology, with many excellent preemptive tools and systems available, it’s hard to imagine an entire data center losing power. However, it was only two years ago that Hurricane Sandy hit the East Coast, wiping out data centers in Virginia, New York, and New Jersey and causing them to lose public power and go dark for days. For government agencies or large enterprise organizations that use internal data centers to house their applications, public multi-tenant clouds offer a lower-cost, easy-to-deploy disaster recovery/continuation of operations (DR/COOP) solution. The following steps can help these data centers plan and execute effectively with minimal to no disruption in the production environment.


DBAccess: a Thread-safe, Efficient Alternative to Core Data
DBAccess claims to provide three key benefits over Core Data: thread safety; high performance, with support for fine-tuning query performance; and an event model that enables binding data objects to UI controls and keeping them updated with changes made in the database. DBAccess can be used and distributed freely. Its latest version includes a few improvements, such as support for asynchronous queries, better performance with large result sets, and reduced memory usage in queries with many columns. DBAccess proposes a very simple usage model. A persistent object declaration is very similar to Core Data's:


Three Questions To Help Cultivate Your Leadership Style
Fortunately, a wise senior manager took me aside and suggested I would be more effective over the long haul if I quit acting like a machine and started acting like a human who cared about people at least as much as he cared about results. He suggested that I was leaving money, performance and the growth of people on the table, and he challenged me to think long and hard about the type of leader I wanted to be. I am grateful to this day for that leadership wake-up call. Over the months following the “machine” comment, he regularly challenged me with a number of provocative questions that ultimately shifted my focus from results at all costs to results through supporting and developing others. How will you answer these questions?



Quote for the day:

"Not all problems have a technological answer, but when they do, that is the more lasting solution" -- Andy Grove

July 14, 2014

Revamping your insider threat program
"A crescendo of discussions is happening in boardrooms everywhere about the impact an insider could have on corporate assets," says Tom Mahlik, deputy chief security officer and director of Global Security Services at The MITRE Corporation, a government contractor that operates federally funded research and development centers. The Washington Navy Yard incident cost 12 people their lives; the full impact of the WikiLeaks and Snowden data releases cannot yet be quantified.


Red Hat CEO Whitehurst on VMware, OpenStack and CentOS
Whitehurst chafed when I asked whether Red Hat should be seen as a Linux or cloud company. Red Hat is an open source company, and that's the compass that guides every move it makes. "The way our DNA works is that we look at the most innovative open source projects that are applicable to the enterprise," he said. That DNA has led Red Hat to the cloud, just as it earlier led the company from Linux as an OS to middleware, then to platform as a service and virtualization. In the future, open source will lead Red Hat to networking. "Open source gives us brand permission to enter a ton of categories," said Whitehurst.


Database startup TempoDB becomes TempoIQ, refocuses on sensor analytics
Andrew Cronk, TempoIQ’s co-founder and CEO, says the company started with capturing time-series data for things like sensors and connected devices because it was the hardest problem to solve in those spaces, where companies were historically trying to contort that data into relational databases or other database systems not designed for that use case. And although it was reasonably easy to attract individual developers with the notion of a time-series database service (TempoIQ is a cloud service), demands began to change as the company tried scoring bigger deals such as Silver Spring Networks.


Are You Ready For the Coming Decade of Change and Innovation?
Friedman pointed out in his keynote that the biggest change of the 21st century was the merging of IT and globalization: the world is now so completely hyper-connected (actually hyperactively connected) that individuals have nearly the same computing power, technology tools, and Internet access that used to be accessible only to private enterprise and governments. He called this a “Gutenberg-scale” moment — really, really big. The world’s individuals can now compete, connect and collaborate with one another like never before. But imagine that you have billions of competitors, regardless of your status or profession, because that’s where it’s headed (if not already there).


Why Your Culture Problem Is About To Get Much Worse
The evidence of culture erosion in the workplace is substantial. According to a Gallup report on The State of the American Workplace, a full 70% of employees (mostly white collar) are “not engaged” or “seriously disengaged” from their job. The results speak to culture—or the lack of one—because Gallup measures engagement based on participants who rate their boss and their workplace on the following types of statements: ... It doesn’t take a genius to realize that good communication can result in higher levels of engagement on each of these elements.


Rethinking Thinking About Strategy
What Christensen, and others before him, have seen is that industry change is continuous, not episodic. This is critical to innovative strategy thinking. Embracing change before it is required has been a message my IMD colleague Peter Killing has advocated: initiate change when resources are abundant and people feel good, rather than when resources are scarce and people are afraid. Industry evolution can be better portrayed as a recurring series of punctuated equilibriums, where ideas take hold, a new industry is born, incumbent champions evolve and prosper, and then they – almost all at the same time – are "disrupted" by outsiders who have no legacy to protect and who are more agile in addressing nascent customer desires.


The Business Designer and the Architecture
Consultants do, to a degree, adopt the language of their clients to smooth communications. But that does not mean they can do so for terms that denote new disciplines such as Business and Enterprise Architecture, or take advantage of the appeal of such terms to rebrand their good old occupations. Moreover, the term architecture sits well with the business because we all relate to construction and urban architecture; at the same time, Enterprise Architecture is well known to the business today even if it does not inspire confidence. And while "Enterprise Architecture" may have IT connotations, the term "architecture" does not.


Open Group goes into mining
Has Open Group absorbed this "reference model" whole, undigested, as it has done with ArchiMate? To me, Open Group now looks more like an anaconda that swallows its prey whole, only to digest it later, if at all. And the EM looks like a quick add-on aimed at quelling the unrest over the business-oriented approach TOGAF has promised for some time now. The problem is that the EM model cannot be generalised; it is too specific to be of use to any other industry. What would "Discover", "Rehabilitate", "Brown", "Green fields"... mean to other industries? And, as an observation, the few horizontal process bands, that is, Control, Measure... seem to have no relationship to the entities (read: boxes) on the horizontal.


There's still a security disconnect on BYOD
More than half of the employees surveyed feared that the company would gain access to their personal data via corporate security tools. Some 46% of workers said they feared personal data would be lost if they left the company. The same number feared a company-mandated security app installed on personal devices would let managers track their location. Nearly half of workers said they would stop using personal devices at work if they were required to install a company-mandated security application. The surveys show the need for better communication between IT organizations and workers on BYOD security, Malloy said.


Chief digital officers are a blessing and a curse for CIOs
Adding a CDO to the mix could be a blessing for CIOs. According to the speakers at the conference, CDOs tend to know code, understand the importance of clean data and get technology. Because CDOs want to create a seamless, user-friendly customer experience, they're natural partners for CMOs. They're also a natural ally for CIOs. They understand, for example, that CMOs are powerful, necessary and "don't understand the first thing about technology," said Jonathan Sackett, CEO of the marketing and advertisement firm MashburnSackett in Chicago. But CDOs could also be a curse for the CIO: CDOs may know IT but their work isn't about keeping the lights on, obsessing over data governance or finding ways to cut costs. Chivers said he'll spend 2014 focused on mobile commerce and mobilizing the digital customer experience.



Quote for the day:

"Everyone needs to be valued. Everyone has the potential to give something back." -- Princess Diana

July 13, 2014

Random Thoughts..
Sometimes you have to chuck it all and start all over again (sigh). So I did, and if you've visited this article before, you will notice that all the methods I tested previously to create random data have vanished... but trust me, that's a good thing. So, I spent last night racking my brain, trying to figure out how to make this stronger, that is, fortify it against any attempts at reversal, and at the same time perhaps produce even better randomness from an algorithm. What follows is the best set of ideas I've had to date, combined in a way that I believe creates a better random generator, better than anything I have come across thus far in my research. For lack of a better name, I'll call it LOKI, because it is a tricky and unpredictable little fellow.
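The LOKI algorithm itself isn't reproduced in this excerpt, but the "fortify against reversal" idea it describes is commonly achieved by publishing only one-way hashes of hidden internal state. A toy Python sketch of that pattern (my own illustration, not LOKI):

```python
import hashlib
import os
from typing import Optional

class HashChainPRNG:
    """Toy generator: each output is a one-way hash of hidden state, so an
    observer of the outputs cannot run the generator backwards."""

    def __init__(self, seed: Optional[bytes] = None):
        # Hidden internal state; never revealed directly.
        self.state = seed if seed is not None else os.urandom(32)

    def next_bytes(self) -> bytes:
        # Publish a hash of the state rather than the state itself...
        out = hashlib.sha256(b"out" + self.state).digest()
        # ...then advance the state with a differently-tagged one-way hash.
        self.state = hashlib.sha256(b"step" + self.state).digest()
        return out

rng = HashChainPRNG(seed=b"demo-seed")
block1, block2 = rng.next_bytes(), rng.next_bytes()
```

Because both the output and the state transition go through SHA-256, recovering the internal state from observed blocks would require inverting the hash, which is exactly the resistance-to-reversal property the article is after.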


Can Enterprise Architecture Reinvent Itself for the Agile Movement?
Clearly, we need to re-invent Enterprise Architecture to retain its best features and avoid technical debt while adapting to the new realities of agility and the cloud. How on earth do we do that? ... Please note that I’m not advocating any abdication of EA responsibility. We need a clear central vision of what we’re building to avoid technical debt and ensure that systems mesh together well. However, the way we accomplish this must change with the times. We need to be able to clearly and quickly communicate our vision and become active evangelists for it, contributing to real work as we encourage our team members to do the right things.


“Governance Now” for Financial Reference Data Governance
Reporting and analytic functions are often hostage to the integrity and long-term quality of reference data. The consistent use of valid, accurate reference data values is critical to reporting because these values often drive reporting dimensions for sorting, grouping, sub-totaling and other calculations. Long-term consistency is critical for basic trending and period-over-period comparisons. Multi-dimensional and new forms of analytics rely on consistent, accurate values to produce fully populated, sorted and calculated views of enterprise performance. Gaps in dimensional stability related to reference data quality impairments have driven major product categories in the software business to support multi-dimensional analysis (anyone here remember Razza?).
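The grouping problem is easy to demonstrate. In this Python sketch (the desk codes and mapping table are invented for illustration), three spellings of one desk split a subtotal into three buckets until a reference-data master repairs the dimension:

```python
from collections import defaultdict

# Three spellings of the same desk code: a classic reference-data defect.
trades = [
    {"desk": "FX", "notional": 100},
    {"desk": "fx", "notional": 250},
    {"desk": "F.X.", "notional": 50},
]

def subtotal(rows, key):
    """Group-and-sum, the way a reporting dimension would."""
    totals = defaultdict(int)
    for row in rows:
        totals[row[key]] += row["notional"]
    return dict(totals)

before = subtotal(trades, "desk")  # three buckets instead of one

# A reference-data master maps every variant onto the one valid value.
REFERENCE = {"fx": "FX", "f.x.": "FX"}

def normalize(rows, key, reference):
    for row in rows:
        row[key] = reference.get(row[key].lower(), row[key])
    return rows

after = subtotal(normalize(trades, "desk", REFERENCE), "desk")
print(before, after)
```

The "before" result silently understates every desk; the "after" result produces the single fully populated bucket that trending and period-over-period comparisons depend on.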


Quantifying the Impact of Agile Software Development Practices
The flexibility designed into the tool permits integration into a variety of workflows – rather than forcing the user to conform their workflow to the tool design. This flexibility turns out to be a significant source of variation for a research effort like ours because it limits the number of assumptions we can make about the workflow generating the data. As a result, the level of variation present in the data was an obstacle – one that cannot be overcome without information on patterns of practice (workflow design) that allow us to make valid inferences about patterns of team behavior that correspond to patterns of performance in the data.


The Value of Information Governance: Finding the ROI
Information governance is the set of multi-disciplinary structures, policies, processes and controls implemented to manage information. Gartner states that "the goal of information governance is to ensure compliance with laws and regulations, mitigate risks and protect the confidentiality of sensitive company and customer data." More than another word for "records management," information governance aims to support an organization's regulatory, legal, risk, environmental and operational requirements.


Out-of-band Initial Replication (OOB IR) and Deduplication
Ah… so if you were expecting that the VHD data would arrive into the volume in deduplicated form, this is going to be a bit of a surprise. At first, the VHD data will be present in the volume at its original size. Deduplication happens post facto, as a job that crunches the data and reduces the size of the VHD after it has been fully copied as part of the OOB IR process. This is because deduplication needs an exclusive handle on the file in order to do its work. The good part is that you can trigger the job on demand and start deduplication as soon as the first VHD is copied. You can do that by using the PowerShell cmdlet provided:
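The excerpt stops before showing the cmdlet itself. On Windows Server with the Data Deduplication feature installed, the on-demand trigger is `Start-DedupJob`; the volume letter below is an example:

```powershell
# Kick off an optimization (deduplication) job now, rather than
# waiting for the scheduled background run
Start-DedupJob -Volume "E:" -Type Optimization

# Monitor progress of running deduplication jobs
Get-DedupJob
```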


The Difference Between Tools and Solutions for Data Governance
The point is, any tool can take on a job and do as it’s told to do – but who sets the policy and strategic planning to instruct those tools? Data governance is a journey, which not only requires tools, but also an intuitive master plan that incorporates process, objectives and measurable results. It follows a path that should be guided by best practices. Solutions developed and lessons learned should be reusable and applied to reduce rework and maintain consistency in future projects. Because data governance is data-intensive, attention should be paid to assessing which data is or is not valuable to the business. This can be defined as “field value” versus “context value.”


Big Data Governance Software: Sensitive Data
The pivotal advantage to employing Dataguise’s Big Data Governance software (which is effective on traditional data as well) is that it expedites and automates the process of implementing governance rules for sensitive data that is potentially discoverable. The traditional manual process involves creating governance policies and employing IT to modify and search through information systems to find and appropriately tag sensitive data. When such information involves Big Data (with its rapid velocity and myriad forms) found in time-sensitive financial and health care industries, such a process swiftly becomes outdated.


Cindy Walker on Data Management Best Practices and Data Analytics Center of Excellence
There are several trends taking place in the field of semantics. One trend is the industry-focused collaboration effort to develop and link domain-specific standard ontologies, such as the Financial Industry Business Ontology (FIBO), to facilitate information sharing and regulatory reporting. Use of standard ontologies can help organizations connect disparate data sets and can enable semantic queries across the web, as well as easy data sharing with internal and external stakeholders (without restructuring the data or developing point-to-point interfaces). FIBO is being developed in phases by volunteer contributors in the financial services industry (with some financial regulator participation) under the authority of the Object Management Group.


Implement Observer Pattern in .NET (3 techniques)
The Observer Pattern is also known as the Publisher-Subscriber pattern. You'll find various articles on how to implement the observer pattern in the .NET Framework using the assemblies provided in the framework. But the .NET Framework has evolved over the years, and along the way it has provided new libraries for creating the Observer Pattern. To give you a brief idea of the Observer Pattern: it defines a one-to-many dependency between objects (a Publisher and multiple Subscribers) so that when one object (the Publisher) changes state, all the dependents (Subscribers) are notified and updated automatically.
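To make that one-to-many dependency concrete, here is a minimal, language-agnostic sketch in Python – an illustration of the pattern itself, not one of the .NET-specific techniques the article covers:

```python
class Publisher:
    """Maintains a list of subscribers and notifies them on state changes."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def set_state(self, state):
        # Changing state automatically notifies every registered subscriber
        for notify in self._subscribers:
            notify(state)


pub = Publisher()
received = []
pub.subscribe(received.append)                        # subscriber 1
pub.subscribe(lambda s: received.append(s.upper()))   # subscriber 2
pub.set_state("changed")
print(received)  # ['changed', 'CHANGED']
```

The Publisher never knows what its Subscribers do with the notification – that decoupling is the point of the pattern.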



Quote for the day:

"The most rewarding things you do in life are often the ones that look like they cannot be done." -- Arnold Palmer

July 12, 2014

Virtual Panel: Real-world JavaScript MVC Frameworks
The Web Platform has come a long way since HTML5 was first made popular and people started looking into JavaScript as a language that could build complex apps. Many APIs have emerged, and there is an abundance of content out there about how the browser can leverage all this. This article series will go a step further and focus on how these powerful technologies can be leveraged in practice – not to build cool demos and prototypes, but the way real practitioners use them in the trenches. ...We'll also talk about technologies (like AngularJS) that go a step further and define the future of how web standards and web development will evolve.


A new kind of network.
The challenge of building a network that allows us to connect based on who we “are” rather than other, more traditional, social constructs is that personal identity is much more complicated than concepts like “your friends” or “who you worked with”. Personal identity is hard enough for us to come to terms with ourselves, much less design a service around. However, there does seem to be potential here. It just doesn’t make sense that we should construct our social networks in the future based on physical categories like “where you happened to be born”, or “where you went to school”, or “where you worked”, when we are connected through digital means to every person on the planet.


The Efficiency CIO vs the Agility CIO
First, with my EA hat on: those who have talked to me about Enterprise Architecture may have heard me espouse the view that it is frequently focused on efficiency. EA is a great tool for mapping out current state and understanding change. Two of the key outcomes of EA are highlighting gaps in change, infrastructure, applications and the organization, as well as highlighting duplication. Duplication obviously yields opportunities for rationalisation and efficiency, with what is typically an easy business case to assemble. As an EA, it can be easy to become focused on making investments in IT highly efficient, keeping a simpler applications architecture and making the right large investments in IT.


Data Security And What Keeps CISOs Up At Night
Security personnel increasingly have to think about the location of their data in a world where data is becoming ever-more distributed. That, along with the concerns that organizations have about governmental and private surveillance, is yet another burden these overworked folks need to shoulder. Data security looks fundamentally different from how it looked in the past. There truly are no hard perimeters for data: it exists within organizational premises, in the cloud, on all manner of social media, on mobile devices of every flavor and, increasingly as we move towards the Internet of Things, on distributed sensors. A recent survey aims to expose the biggest issues that data security staff have to face.


Big Data: No Hoarding Allowed
"We would highly discourage storing it in a fashion that's sort of the definition of big data -- where you have it in some SSD environment on Amazon, or on a rack of servers that are costing you a fortune -- because you're not getting value out of it," he said. "You're not asking questions because it's just too big." Still, companies often become data hoarders. "They're living in the hoarder's environment," said Atkinson. "They're taking in all the data and putting it into a repository." One alternative: Rather than saving every bit, companies should determine the questions they want to ask of their data, and then store the indexes they really need, a move that "will take your data down by many factors," he claimed.


Cloud computing: Sky is the limit for IT firms
"We see flavours of cloud computing in most of our large outsourcing contracts," said Anand Sankaran, president and global head of infrastructure and cloud computing at Dell. "Though the cloud component in large contracts could be only 20-25 per cent of the total order, it has 80 per cent of the weight in the final decision." Sankaran added if any Indian infotech services provider was not making serious investments into creating capabilities around cloud computing, it was making a big mistake.


Dataguise Offers Data Governance Solution For RDBMS And Apache Hadoop
Dataguise for Data Governance enables organizations to easily declare policies, discover sensitive data, view and track entitlements, and audit access to sensitive data – automated across transactional databases, data warehouses, file shares, Apache Hadoop, and other Big Data sources. Initial supported platforms include Oracle, IBM DB2, SQL Server, Teradata, Cloudera, Hortonworks, MapR and Pivotal HD. Dataguise for Data Governance is fully compatible with DgSecure, Dataguise’s flagship platform for data privacy, protection and security for sensitive data across the enterprise.


The true impact of Heartbleed on the enterprise
As the Heartbleed OpenSSL incident became more widely known, the digital certificate-issuing authorities around the world also found themselves challenged to support the massive and sudden demand that literally appeared overnight at their collective doorstep. Although not a lot has been written about what is essentially a supply chain issue having to do with equipping the relevant parties with enough new digital certificates in time, industry experts agree that this delay points to broader fundamental issues that are worthy of being addressed in the near future from a supply chain and infrastructure viewpoint.


IT pros shouldn't expect a real vacation
Digital transformation efforts are ramping up from start-up mode to active enterprise deployment, according to a new survey from McKinsey & Company. In a report entitled "The Digital Tipping Point," the research firm notes that a majority of CIOs, CEOs and CMOs are now involved in digital projects. "And a significant number of executives feel that digital will play a prime role in driving organizational growth for the next several years," says an article at CIO Insight. Despite the increased activity, however, there remain a number of obstacles to successful digital transformation at many organizations.


How cloud computing can strengthen IT's control
Ironically, as organizations use more and more cloud resources, IT has a new way to reassert itself, even if users continue to get their own services. That way is the service catalog, a collection of public cloud and local services stored in a huge registry, much like in the days of SOA. These services are tracked in terms of who can use them and how they use them, and the service catalogs become the single jumping-off point for building and deploying applications that use public cloud services, as well as traditional systems. IT can bring order to chaos, while still providing the benefits of flexibility and immediacy that got users to go to the cloud in the first place.



Quote for the day:

"You can't talk yourself out of problems that you behave yourself into." -- Stephen Covey

July 11, 2014

6 Crucial Data Security Lessons the U.S. Can Learn from Other Countries
Silicon Valley may have led the digital revolution, but Washington shows few signs of adapting to the times. As a result, not only is it easier to vote for a YouTube video than for a politician, but countries like Estonia are focusing their resources and building their infrastructures to better manage a digital society. Estonia is already more advanced than the U.S. in terms of digital consciousness. Other nations like Indonesia are rapidly becoming more connected without the legacy infrastructure baggage present in more established countries. Soon, Indonesia will be entirely mobile. The U.S. needs to make major changes to its data infrastructure to keep up, and the first step is to mimic these innovations already in place around the world.


Cloud Security Brokers Play a Key Role
As enterprises large and small embrace cloud computing, including SaaS, a need arises for someone to sit in the middle and manage the connection between SaaS service provider and user. Enter the cloud access security broker, or CASB. The research firm Gartner placed the cloud access security broker at the top of a list of the 10 most important technologies for information security that it unveiled at the Gartner Security & Risk Management Summit held in late June, in National Harbor, Md. Gartner defines CASBs as "security policy enforcement points placed between cloud services consumers and cloud services providers to interject enterprise security policies as the cloud-based resources are accessed."


Microsoft CEO Nadella: Windows is over, the future is mobile and the cloud
"More recently, we have described ourselves as a 'devices and services' company. While the devices and services description was helpful in starting our transformation, we now need to hone in on our unique strategy." So what is the new direction? It's a bit vague and hazy, but it's clearly not tied to Windows. In fact, he doesn't even get to mentioning desktop Windows until the 21st paragraph. Even then, he gives Android and iOS equal play with Windows, because he talks about the company's Enterprise Mobility Suite, which in his words enables "IT organizations to manage and secure the Windows, iOS and Android devices that their employees use, while keeping their companies secure."


The dentist will scan you now: The next generation of digital dentistry
The best integrated technology in the dental practice may have been the most humble: scheduling software on the computer in the exam room that enabled the hygienist to book my next appointment so it would coincide with the dentist's schedule and her schedule, and to provide me with the dentist's email address so I could follow up if I had any questions. Despite the promise of robotic dentistry, there's still no replacement for a professional, kind human on the horizon any time soon.


The Amazing Big Data World of Kaggle and the Crowd-Sourced Data Scientist
Although it is frequently reported that they have “over 100,000 data scientists”, these are actually registered users and competitors rather than employees. There are no qualification or experience barriers to registering as a Kaggle data scientist; previous winners have ranged from data science academics and professionals to enthusiastic, knowledgeable amateurs. However, certain competitions are occasionally reserved for “masters” – those who have shown they have the right stuff through their previous work with Kaggle. The company also recruits its own staff to work on internal projects. In fact, it is advertising for recruits now – and although no requirements are listed, other than that applicants be “experienced”, two questions on the application form ask for the mean and standard deviation of two sets of numbers.
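For what it's worth, those two statistics are one-liners with Python's standard library. The numbers below are made up, and whether the form expects the population or sample standard deviation is not stated – the sketch uses the population version:

```python
import statistics

numbers = [2, 4, 4, 4, 5, 5, 7, 9]  # made-up sample data

mean = statistics.mean(numbers)
stdev = statistics.pstdev(numbers)   # population standard deviation

print(mean)   # 5
print(stdev)  # 2.0
```

Swap `pstdev` for `stdev` if the sample (n-1) variant is wanted.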


The Failure of the Modern Project and How We Fix It
A project is a relatively simple construct which has taken on gigantic proportions in IT. A project is simply a means of changing our current state to some future state using a series of tasks within an allotted timeframe. The project itself is a tool to accomplish a goal, and the goal is almost never about completing the project. I do admit that certain compliance projects are time sensitive, as are certain time-to-market projects. But effectively, someone comes up with an idea which should accomplish a business goal. Whether the goal (performance driven) or the idea (innovation driven) comes first does not necessarily matter, as long as they both end up measurable.


What is a Botnet?
Computers in a botnet, called nodes or zombies, are often ordinary computers sitting on desktops in homes and offices around the world. Typically, computers become nodes in a botnet when attackers illicitly install malware that secretly connects the computers to the botnet and they perform tasks such as sending spam, hosting or distributing malware or other illegal files, or attacking other computers. Attackers usually install bots by exploiting vulnerabilities in software or by using social engineering tactics to trick users into installing the malware. Users are often unaware that their computers are being used for malicious purposes.


Is Enterprise Architecture Completely Broken?
In fact, the notion that the practice of EA has become all about documentation rather than effecting business change is a common theme across many boardrooms and IT shops. EA generally centers on the use of a framework like The Open Group Architecture Framework (TOGAF), the Zachman Framework™, or one of a handful of others. Yet while the use of such frameworks can successfully lead to business value, frameworks like TOGAF and Zachman “tend to become self-referential,” according to Andreetto, where EAs spend all their effort working with the framework instead of solving real problems. “Frameworks are cocaine for executives – they give them a huge rush and then they move to the next framework,” he adds.


eBook: Entity Framework Code First Succinctly
Follow author Ricardo Peres as he introduces the newest development mode for Entity Framework, Code First. With Entity Framework Code First Succinctly, you will learn the ins and outs of developing code by hand in Code First. With this knowledge, you will be able to have finer control over your output than ever before.


Navigating Innovation’s Perilous First Mile
“Either end of the spectrum is dangerous. At one extreme is ‘paralysis by analysis.’ Too many innovators create elegant pieces of Microsoft fiction. The Excel spreadsheet features ‘what if’ analyses and pivot tables that would rival those created by a seasoned investment banker. The PowerPoint document is stunning, with charts and visuals comparable to Al Gore’s award-winning presentation on climate change. And the Word memo summarizing it all features prose that is so lucid that somewhere Malcolm Gladwell is shedding a tear. The plan looks airtight on paper, but in reality, it is incredibly brittle. As Intuit’s Scott Cook once quipped, ‘For every one of our failures, we had spreadsheets that looked awesome.’”



Quote for the day:

"Chance has never yet satisfied the hope of a suffering people." -- Marcus Garvey

July 10, 2014

Controversial data center building and operation practices
There are many new and exciting data center building design, configuration and operation choices, but many of them involve trade-offs. These newer standards and best practices have adherents and detractors, and potential detrimental effects or poor return on investment won't always be immediately obvious. Even some standards required by building codes are nonetheless controversial. The specific concerns surrounding hot-aisle containment designs and safety warrant their own discussion.


The Impact of Big Data on Linguistics
Linguistics is an area that is constantly changing from one day to the next. There’s no stopping the evolution of language, and with the web and social media the speed at which it’s evolving has increased dramatically. There are so many contributing factors to language that impact how and when it changes that it can be extremely difficult to track and completely understand what about the language is changing and why it’s changing. Big data technology, like Hadoop Hive, is vital in assisting interested parties in gaining deeper and clearer insights into linguistics. It simplifies processes from weeks and months to seconds and minutes. It opens up possibilities that weren’t available before. Big data takes linguistics to the next level.


Provisioning versus Configuration
It is important to recognize the difference between these two steps in the deployment process and take into consideration the impact of configuration after provisioning on that process. Depending on the method of configuration, this step can have a serious impact on the speed and efficiency of the deployment process as a whole. It is also important to note for monitoring purposes, as virtual machine health and status is not the same as the health and status of the service, whether that be an application or a network service. Both must be monitored and managed in a virtualized infrastructure to meet MTTD (mean time to detection) and MTTR (mean time to resolution) objectives.
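As a rough sketch, the MTTD and MTTR objectives mentioned above are simple averages over incident timestamps. The record layout and the choice to measure MTTR from detection (rather than from occurrence) are assumptions for illustration:

```python
from datetime import datetime

# Hypothetical incident records: when the fault occurred, was detected, was resolved
incidents = [
    {"occurred": datetime(2014, 7, 1, 9, 0),
     "detected": datetime(2014, 7, 1, 9, 10),
     "resolved": datetime(2014, 7, 1, 10, 0)},
    {"occurred": datetime(2014, 7, 2, 14, 0),
     "detected": datetime(2014, 7, 2, 14, 20),
     "resolved": datetime(2014, 7, 2, 15, 30)},
]

def mean_minutes(deltas):
    """Average a list of timedeltas, expressed in minutes."""
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 60

mttd = mean_minutes([i["detected"] - i["occurred"] for i in incidents])
mttr = mean_minutes([i["resolved"] - i["detected"] for i in incidents])

print(mttd)  # 15.0 -- mean time to detection, minutes
print(mttr)  # 60.0 -- mean time to resolution, minutes
```

The point the passage makes is that these deltas must be computed against the service, not just the virtual machine: a healthy VM with a dead application still counts as an incident.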


Botnet aims brute-force attacks at point-of-sale systems
Micros Systems is based in Columbia, Maryland, and provides software applications, services and hardware systems, including POS terminals, to the hospitality and retail industries. If the BrutPOS malware successfully guesses the remote access credentials of an RDP-enabled system it sends the information back to a command-and-control server. Attackers then use the information to determine whether the system is a POS terminal and if it is, to install a malware program that's designed to extract payment card details from the memory of applications running on it.


Building A Security-Aware Culture
Awareness and training is one of the most effective elements of any information security program because most of the risks that organizations face are caused by user error, misconfiguration of systems or mismanagement. In fact, according to IBM’s 2014 Cyber Security Intelligence Index, 95% of all attacks in 2013 involved some type of human error, the most prevalent being an employee double-clicking on an infected attachment or URL. The goal of an information security awareness and training program is to stop these errors from taking place by educating users on their responsibilities for ensuring the confidentiality, integrity and availability of information as it applies to their roles within the organization.


My “Desert Island Half-Dozen” – recommended reading for resilience
When I speak with customers, they often ask how they can successfully change the culture of their IT organization when deciding to implement a resilience engineering practice. Over the past decade I’ve collected a number of books and articles which I have found to be helpful in this regard, and I often recommend these resources to customers. I’ve included my favorites below, in no particular order, with a short explanation of why I’m recommending them.


Shift Left Performance Testing - a Different Approach
This article explains a different approach to traditional multi-user performance testing: using the same tools, but combining them with modern data visualisation techniques to gain early insight into location-specific performance and application areas that may have "sleeping" performance issues. Most programs concentrate first on functionality and second on anything else. Multi-user performance testing, performed with tools like HP LoadRunner or Neotys NeoLoad, is usually one of those activities that happen late in the testing cycle. Many times it happens in parallel with User Acceptance Testing, when the new system is already exposed to the end users.


Finance Analytics Requires Data Quality
A main requirement for the data used in analytics is that it be accurate because accuracy affects how well finance analytic processes work. One piece of seemingly good news from the research is that a majority of companies have accurate data with which to work in their finance analytics processes. However, only 11 percent said theirs is very accurate, and there’s a big difference between accurate enough and very accurate. The degree of accuracy is important because it correlates with, among other things, the quality of finance analytics processes and the agility with which organizations can respond to and plan for change.


Considering cloud service tiers
As enterprises move to public cloud-based resources, the use of application and data categories will play more important roles, for the same reasons listed above. For instance, there are public cloud storage services that are guaranteed to support SLAs (service level agreements) that approach 100 percent uptime, but the costs are much higher per gigabyte of storage. Of course, there are public cloud services that don’t offer the same amount of uptime, but charge way less. You need to match the right storage or compute services to the right use of those services by application tier, based upon cost-to-value. Again, we’ve been doing this for years with hardware and software; now we’re just extending it to the use of cloud-based services. The concepts should not be new for most enterprises.


The Right Fit: The Enterprise Architect Selection Dilemma
With the increasing focus on mapping Enterprise Architecture value towards delivering business outcomes, it may be time to start re-evaluating the process of hiring and career development of this vital role. And there are organizations that have recognized this. Waddell and Reed’s listing on LinkedIn, if it is still up, is a good example of a well-defined EA role. IASA’s skills matrix and job descriptions for architects can also serve as a useful reference for this purpose. IASA’s EA job description lists around fifteen distinct job responsibilities, with additional sub-items around knowledge management and engagement. IASA also lists twenty separate criteria covering education, skills and experience for an Enterprise Architect.



Quote for the day:

"Leadership cannot really be taught. It can only be learned." -- Harold S. Geneen

July 09, 2014

The Agile BI Ship has Sailed — Get On Board Quickly or Risk Falling Behind
Do not use the term Agile BI synonymously with the terms intuitive and user friendly — two hugely overused and hyped terms in BI. Unfortunately, these terms are highly subjective and qualitative. Point-and-click, drag-and-drop GUIs may be intuitive to an experienced professional with a background in command line interfaces, but not so obvious to a millennial who grew up with a thumb-typing mobile phone UI. And while menu- and prompt-driven instrumented (radio buttons, dialog boxes, etc.) applications may seem user friendly to left-brained people (who think in numbers and lists), right-brained office workers (who see the world in pictures and associations) may prefer an application driven by icons, visual associations, and artistic Infographics.


Want to innovate? Become a "now-ist"
“Remember when people used to try to predict the future?” In this engaging talk, the head of the MIT Media Lab skips the future predictions and instead shares a new approach to creating in the moment: building quickly and improving constantly, without waiting for permission or for proof that you have the right idea. This kind of bottom-up innovation is seen in the most fascinating, futuristic projects emerging today, and it starts, he says, with being open and alert to what’s going on around you right now. Don’t be a futurist, he suggests: be a now-ist.


Google Tests Personal Data Market To Find Out How Much Your Personal Information Is Worth
Unsurprisingly, people value certain kinds of information more highly than others. But exactly how they value it depends on a complex set of other factors, such as the conditions under which information was gathered. The experiment involved a kind of living lab in Italy that monitored people continuously. The team recruited 60 people to take part in the study and gave them each a smart phone that recorded phone calls made and received, which applications were in use at any time and the time spent on them, the users’ locations throughout the day and the number of photographs taken.


How to dilute the value of analytics
Business Intelligence (BI) can mean many things to many people, but generally BI is associated with business reports. When you fold business analytics (BA), especially advanced analytics that are predictive or prescriptive, under the BI umbrella, you inherently dilute the value proposition that analytics can provide to an organization. Why is this important? Because everyone knows analytics is hot, so everyone today is selling some kind of analytics. When we allow business analytics to be synonymous with BI, we allow everyone's claims that they can "do analytics" to ring true.


Free ebook: Rethinking Enterprise Storage: A Hybrid Cloud Model
Rethinking Enterprise Storage: A Hybrid Cloud Model describes a storage architecture that some experts are calling a game changer in the infrastructure industry. Called the Microsoft hybrid cloud storage (HCS) solution, it was developed as a way to integrate cloud storage services with traditional enterprise storage. The author, Marc Farley, works at Microsoft on hybrid cloud storage solutions as a senior marketing manager. The book includes a Foreword by storage industry expert and noted blogger Martin Glassborow, better known in the industry as Storagebod.


This is what the new hybrid cloud looks like
Leong says hybrid cloud management is not about bursting; instead, customers should think about supporting two basic IT environments today: an old one and a new one, what she calls “bi-modal” management. The old environment is typically a company’s system of record that is heavily customized to the organization’s specific use case and serves a core function for the business. The new IT environment is where the company pursues leading edge projects; applications and software are developed rapidly, with fast iterations and quick launches. And IT has a challenge: “You don’t want your old stuff to slow down your new stuff,” Leong says. “If you try to blend those two you’ll end up doing neither one well.”


CloudPhysics Adds Virtual Storage Troubleshooting Service
CloudPhysics is a new kind of online monitoring company that analyzes the data from many customers to see what's working where and what isn't. Then, as fresh trouble brews, its analytics system consults the knowledge base and alerts customers to the remedies. Its monitoring service can spot underlying hardware issues, such as firmware bugs or device incompatibilities, as well as report on the overall operational health of a virtualized environment. Unlike other monitoring systems, however, it claims to be predictive and prescriptive, allowing customers to take actions that head off trouble before end users are inconvenienced or systems are brought to a halt.


Panel tackles how to make mobile devices as secure as they are indispensable
As smartphones have become de rigueur in the global digital economy, users want them to do more work, and businesses want them to be more productive for their employees — as well as powerful added channels to consumers. But neither businesses nor mobile-service providers have a cross-domain architecture that supports all the new requirements for a secure digital economy, one that allows safe commerce, data sharing and user privacy. So how do we blaze a better path to a secure mobile future? How do we make today’s ubiquitous mobile devices as low risk as they are indispensable? BriefingsDirect recently posed these and other questions to a panel of experts on mobile security:


Simplifying IT Pays Off With Big Savings, Better Business Success
IT organizations that support demanding business requirements often find they need to support greater levels of complexity. The business side wants better accessibility for users and easier access into customer data. Ironically, as technology gets simpler for end users, it gets more complicated behind the scenes. Complexity is a fact of life for IT professionals, but according to a new IDC study, corralling that complexity can save enterprises big-time and improve business outcomes.


Architecting for big data
The disjunction between accurate and fast will only grow as big data gets bigger. As the Internet of Things (IoT) moves in, IT departments will face ever more infrastructure bottlenecks. Jarr said the three most common points of congestion are ingesting more and new sources of data, developing processes to quickly access that data to make data-driven decisions, and producing faster analytics for the business. Removing the roadblocks will "take fast data and start making it very smart data," he said. The problem may be that IT has simply outgrown its legacy relational database management systems (RDBMS).



Quote for the day:

"I hear and I forget. I see and I remember. I do and I understand." -- Chinese Proverb

July 08, 2014

Use virtual volumes vs. SDS in the fight for storage efficiency
Freewheeling application cut and paste is just the beginning of the benefits, advocates say. Software-defined storage also means you don't need to add steps to provision new storage to the guest application when needed, or to ensure the proper services are associated with the new storage (data protection services, thin provisioning, deduplication and so on), or to change parameters and processes for managing storage with each configuration change. These things would all be enabled in the brave new world of server-attached, software-defined storage in a way they never were in legacy SAN or NAS, according to evangelists.


Comparing Cloud Compute Services
Comparing cloud compute or servers is a different story entirely. Because of the diverse deployment options and dissimilar features of different services, formulating relevant and fair comparisons is challenging to say the least. In fact, we've come to the conclusion that there is no perfect way to do it. This isn't to say that you can't - but if you do, or if you are handed a third party comparison to look over, there are some things you should keep in mind - and watch out for (we've seen some poorly constructed comparisons). The purpose of this post is to highlight some of these considerations. To do so, I'll present actual comparisons from testing we've done recently on Amazon EC2, DigitalOcean, Google Cloud Platform, Microsoft Azure and SoftLayer.
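One of the pitfalls the post warns about is comparing raw benchmark scores without accounting for what each instance costs. A minimal sketch of that normalization step, in Python; all provider names and figures below are hypothetical, not actual measurements of EC2, DigitalOcean, Google Cloud Platform, Azure or SoftLayer:

```python
# Hypothetical illustration: normalizing raw benchmark scores by hourly
# price, so services are ranked on price-performance rather than raw speed.
# All names and numbers are invented for demonstration purposes.

offerings = {
    "provider-a": {"score": 1200, "usd_per_hour": 0.12},
    "provider-b": {"score": 900,  "usd_per_hour": 0.07},
    "provider-c": {"score": 1500, "usd_per_hour": 0.20},
}

def price_performance(offers):
    """Return offerings ranked by benchmark score per dollar-hour."""
    ranked = sorted(
        offers.items(),
        key=lambda kv: kv[1]["score"] / kv[1]["usd_per_hour"],
        reverse=True,
    )
    return [(name, round(o["score"] / o["usd_per_hour"], 1))
            for name, o in ranked]

for name, ratio in price_performance(offerings):
    print(f"{name}: {ratio} score units per $/hr")
```

Note how the ranking can invert: the fastest instance in this made-up data ("provider-c") comes last once price is factored in, which is exactly the kind of nuance a poorly constructed comparison can hide.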


Larry Page on Google’s Many Arms
Mr. Page, who was joined in the interview by Sundar Pichai, the executive in charge of Google’s Android and Chrome software projects, did not seem overly bothered by the outbursts. “We’re in San Francisco, so we expect that,” Mr. Page said of the protests. “There’s a rich history of protest in San Francisco.” Mr. Pichai pointed out that the company had introduced initiatives to improve its relationship with city residents. This year, it gave $600,000 to the city to bring free Wi-Fi service to San Francisco parks. “I think in some ways it’s good that there’s an open debate about it and I think we needed it,” Mr. Pichai said. “There’s been a lot of growth and the area is trying to adapt to that growth and that has been a concern.”


Databricks Unveils Spark as a Cloud Service
“One of the common complaints we heard from enterprise users was that Big Data is not a single analysis; a true pipeline needs to combine data storage, ETL, data exploration, dashboards and reporting, advanced analytics and creation of data products. Doing that with today’s technology is incredibly difficult,” said Databricks founder and CEO Ion Stoica. “We built Databricks Cloud to enable the creation of end-to-end pipelines out of the box while supporting the full spectrum of Spark applications for enhanced and additional functionality.” Spark provides support for interactive queries (Spark SQL), streaming data (Spark Streaming), machine learning (MLlib) and graph computation (GraphX) natively with a single API across the entire pipeline.


MapR Looks to Enhance Hadoop Accessibility with App Gallery
“Hadoop is a wonderful platform for doing large scale analytics on all different types of data, as long as you have got the right people running it that know what to do with it,” said John Webster, Senior Partner at Evaluator Group. “And sometimes those people can cost a lot of money. So there has been desire from the enterprise side to say, ‘Look, can you give us something easier to use to manipulate and get value from Hadoop other than going out and hiring the expertise?’ So this app gallery starts to fill that hole.” The app gallery also makes it easy for developers to submit their applications.


Top hardware firms join forces on IoT standards
The OIC is focused on defining a common communications framework based on industry standard technologies to connect and manage the flow of information across IoT devices. The goal is the design of products that intelligently, reliably and securely manage and exchange information under changing conditions, the group said in a statement. "Open source is about collaboration and choice,” said Jim Zemlin, executive director of The Linux Foundation. “The Open Interconnect Consortium is yet another proof point of how open source helps fuel innovation," he said.


Rollback and Recovery Troubleshooting; Challenges and Strategies
Changes to the structure and code of your databases can go seriously wrong, leading to downtime and data loss. Obviously, you’ll do everything possible to prevent this from happening, but that only reduces the probability of things going wrong. The chance of a failed deployment still exists, and you need effective ways of recovering from such an event as quickly and cleanly as possible. Do you have the best possible means of ensuring that you can smoothly recover from your deployment disasters? What are the trade-offs of the various approaches? This article walks through the different mechanisms you can use to ensure you have at least one effective, documented procedure, hopefully more, to recover from, or even undo, a failed deployment.
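One of the simplest mechanisms in that toolbox is wrapping a deployment's schema changes in a single transaction, so that if any statement fails the whole change is undone. A minimal sketch using Python's stdlib sqlite3; the table and column names are hypothetical, and production RDBMSs differ in how much DDL they allow inside a transaction:

```python
import sqlite3

def deploy_migration(conn, statements):
    """Apply schema-change statements atomically: if any statement
    fails, roll back all of them, so a failed deployment leaves the
    database exactly as it was before."""
    cur = conn.cursor()
    cur.execute("BEGIN")
    try:
        for stmt in statements:
            cur.execute(stmt)
        cur.execute("COMMIT")
        return True
    except sqlite3.Error:
        cur.execute("ROLLBACK")
        return False

# Autocommit mode so we control transactions explicitly.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY)")

# The second statement is deliberately broken; the successful ALTER
# before it must be rolled back along with it.
ok = deploy_migration(conn, [
    "ALTER TABLE orders ADD COLUMN status TEXT",
    "ALTER TABLE no_such_table ADD COLUMN x TEXT",
])
print("deployed:", ok)  # deployed: False
cols = [row[1] for row in conn.execute("PRAGMA table_info(orders)")]
print(cols)  # the half-applied 'status' column was undone: ['id']
```

Note that transactional rollback only covers the schema change itself; undoing a deployment that has already destroyed data (a dropped column, a lossy type change) needs one of the heavier recovery strategies the article goes on to discuss, such as restoring from backup.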


Auto-Autonomy: Cars Are Racing Toward Disruption
The unbundling of features of cars such as keys, personalized maps and entertainment mean that I can walk up to a car, tap in and drive off comfortably. I can also summon an equally convenient ride with precise GPS location. These benefits are extended to commercial fleets of vehicles, which suffer these same inefficiencies on a microeconomic scale. You can see shared fleets of cars using sharing technology that keeps cars in use and reduces the number of cars on the road — better for the owners, better for the roads and better all around. Needless to say, fewer cars is a massive disruption to the auto industry.


There Are No 'Kodak Moments'
Kodak was a technical treasure-chest, but the problems that it faced were more marketing than technical, and had less to do with the product(s) than they did with the role that the products played in the customers’ lives. Kodak lacked the ability to either interpret those roles or articulate them in a way that could drive innovations with a higher probability of adoption. It undoubtedly did not help that Kodak attempted to reduce the risks it was under in the imaging business by diversifying (and dispersing scarce resources and top management attention) into such unfamiliar businesses as pharmaceuticals [with its purchase of Sterling Pharmaceuticals], which further blurred the vision of what the firm stood for and what it aspired to achieve.


Understanding the Android Resource System
A large part of any Android application falls under the category of resources. In this context, resources can include things like layouts, images, audio, video, language definitions, styles and so on. The resource system in Android is quite powerful, and while it may seem odd at first, there's a method to its madness. In this article, I'll walk through the basics of how this system works, and how you can take advantage of it in your apps. When you create a new Xamarin.Android application, some resources are provided by default, and can be found in several subfolders under the main Resources folder. I'll start by taking a look at those folders and files provided in the default project.



Quote for the day:

"The best way to have a good idea is to have lots of ideas." -- Linus Pauling