
Dec 08

Survey Uncovers Financial Industry Challenges

Financial services organizations rely heavily on information found on public websites, social networks, and web portals to monitor markets, track the competition, identify suspicious fraud activity, maintain sanction lists, automate processes with B2B partners, and listen to what customers are saying. Access to these external sources of structured and unstructured information typically requires manual integration, which leads to tedious searching, copying, and pasting of data into spreadsheets, databases, or applications. This information is often time sensitive, so relying on manual processes defeats the very notion that the gathered information is timely. Time is money.

These organizations also depend on an IT infrastructure to meet these needs, and their data integration requirements must address the growing need to access external data sources. Integrating internal systems with external sources can be challenging to say the least, especially when organizations are constantly adding new external sources of information to their operations, and when those external websites and web portals either don't provide APIs or the development effort is too time consuming and costly. Keep in mind: if IT is struggling to keep up with the business's demands to add new sources of information and eliminate repetitive manual data handling now, keeping pace becomes an unobtainable target for most financial services organizations.

A recent Computerworld.com survey of 100 financial services professionals highlights the challenges of acquiring and integrating data from multiple data sources, including external websites and web portals. This survey revealed troubling challenges facing the financial industry. Here are a few highlights:

  • Struggles with integrating external sources: 43% of the participating financial institutions are struggling with a lack of integration between external data sources and internal systems. Of the external sources these organizations need to integrate with, 84% are web portals from which business information must be extracted and fed into internal systems and processes.
  • Manual or hard-coded integration: 55% of respondents reported that moving data between external sources and internal systems either involves users manually transferring it or relies on custom-developed, hard-coded integrations that do not scale to support many external data sources.
  • Manual data handling: These organizations identified the time required to manually import data and the time required to perform validations as the two most costly consequences of being unable to integrate external data sources.
  • Deployment delays: Overall, respondents want a solution that quickly adapts to varying data sources; unfortunately, integration projects often take months to complete. Only 8% of the financial services firms responding indicated that an external information integration project is completed in less than a month, and 31% reported it takes more than three months, illustrating the need for a faster and more efficient way to perform external data integration.

The bottom line: manual processes no longer fit into any financial organization's business processes. It's clear that the time-consuming development projects used to integrate external data sources into an enterprise infrastructure are not a viable long-term strategy.

Financial organizations depend on data, whether it's being used to transform industries, grow market share, defend brands, or protect customers. Integrating that data takes an alternative approach, one that does not simply rely on traditional development tools and custom one-off projects. Data integration platforms that are easy to deploy and customize are the next step for external data integration.

Download the complete IDG survey here.

Nov 06

We're so excited to announce the KapowNow webinar series. This is not just another set of webinars. We've specifically designed them so everyone can fit them into their busy schedules. They are short! In less than 30 minutes, you can learn about application integration, process automation, rapid creation of business applications, Big Data, social media monitoring, mobile enablement, and much more.

Watch live demos and get all the tips and tricks you need to make integration less stressful, your business users more empowered, and your company more agile and innovative. This series will show you how to use Kapow Katalyst and Kapow Kapplets as a single application integration platform, solving disconnects across the organization, such as getting analysts their data, providing employees mobile access to legacy applications, and eliminating manual processes, just to name a few examples.

Register today and spread the word by tweeting and using the hashtag #kapownow.

By: Hila Segal, Director, Product Marketing

Nov 18

I’m flying back from New York where I presented “The Moneyball Approach to Big Data – Creating an Unfair Advantage” at the Wall Street Technology Association’s Hot Technologies Forum. Big Data is an area technologists are curious about, but I’m concerned there’s a “wait and see” approach. My job is to create value for my customers, and I’d hate for any of you to miss out on this opportunity.

Skepticism, or a "late adopter" mentality, is understandable – if you want to forgo a low-risk, high-reward opportunity and let your competition gain the advantage. Everyone is benefitting from Big Data in some form or another – most probably don't even know it. But believe me, there are hundreds of scenarios I could walk you through that could save your company millions of dollars, grow revenue by double-digit percentages, create more personalized products that delight your customers, automate real-time feedback on your brand, products, and competitor prices, build custom research that lets you see trends before your competitors do, and overall make you a much more agile business that scales with your newfound vigor and growth.

What’s the secret to Big Data rewards? “Relevance”, “Access”, “Intelligence” and “Action”.

The most common definition I’ve seen for Big Data relates to the 3 Vs:

  • Volume: it’s Big – terabytes and petabytes
  • Variety: it comes in many forms – internal, external, structured and unstructured
  • Velocity: it grows fast and changes quickly – making real-time capture and action hugely important

And this is always supported by numbers showing how ginormous Big Data is:

  • The New York Stock Exchange creates 1 terabyte of data per day (InformationWeek)
  • 10,000 payment card transactions are made every second around the world (American Banker)
  • 30 billion pieces of content are shared on Facebook every month (McKinsey)
  • Twitter feeds generate 8 terabytes of data per day (InformationWeek)

Before you go out and buy more storage, consider what you want to do with it. If there are 200M tweets a day equaling 8 terabytes of data, but only 1000 of the tweets relate to your product or company, do you need to store and analyze all 8 terabytes every day? Although Big Data is big, don’t get caught up in all the massive numbers. Stick with what’s relevant to your business.
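The arithmetic above suggests the first step is a relevance filter, not more storage. Here is a minimal sketch of the idea (the keyword list and tweet strings are purely illustrative):

```python
def relevant(tweets, keywords):
    """Keep only the tweets that mention your business's keywords --
    the roughly 1,000 that matter, not the full 200M-a-day firehose."""
    kws = [k.lower() for k in keywords]
    return [t for t in tweets if any(k in t.lower() for k in kws)]

# relevant(["Acme widgets rock", "nice weather today"], ["acme"])
# keeps only the first tweet
```

In practice the matching would be richer (stemming, misspellings, product nicknames), but the principle holds: filter for relevance before you pay to store and analyze.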

Forrester Analyst Brian Hopkins made a great point in his blog "Big Data will help shape your market's next big winner", stating that Forrester estimates enterprises use only 5% of their available data. So the playing field is wide open for anyone to quickly take advantage of the 95% they're currently ignoring.

But slow down there, pardner. Sybase published Big Data, Big Opportunity, which stated, "for the median Fortune 1000 company… a 10% increase in usability of data translates to an increase of $2.01B in total revenue per year, [and] a 10% increase in accessibility to data translates to an additional $65.67M in net income per year." So don't think you have to go from 5% to 100%. You really only need to go from 5% to 5.5%.

The internet plays a huge role in the rapid growth of Big Data, giving individuals the ability to post and upload immense amounts of pictures, text, video, and mobile data, and businesses the channel to offer access to customers and partners through web-based applications (think Oracle, salesforce.com, social media, procurement, logistics, publishers, and so on).

In reviewing other articles about Big Data, despite all the discussion around its massiveness, I didn't find a single one mentioning the difficulty of accessing the data spread throughout all these applications. This is a HUGE POINT to understand, because you are SOL if you can't access the data you need. If I told you that any app or data you can see in your web browser (customer data, bank transactions, Twitter, blogs, supply chain vendors, government data, competitor prices, etc.) could be automatically accessed and loaded into the app, database, or spreadsheet of your choice, how many game-changing Big Data projects could you think of? Point-in-time cash position understanding of billions of dollars across 300 banks? No problem. Monitor competitor pricing on 50,000 SKUs every day? Simple. Automate a twenty-three-step manual invoicing process to get paid millions of dollars two days faster? Done. Real-time, automated access to the data you need is the key to success with Big Data. Lest you think this is all fantasy, learn how the Kapow Katalyst Application Integration Platform provides real-time access to Big Data.

There’s huge difference between “I have terabytes of data – videos, satellite pictures, social media conversations, and research reports” and “I know where Osama Bin Laden is”. It’s Data vs. Intelligence. Data is useless if you can’t extract meaningful intelligence from it. And the quality of your intelligence is most likely much less dependent on the volume than the relevance and ability to access it.

And the whole point of having relevant, accessible, intelligent Big Data is that it is actionable. Otherwise it's just a recommendation, or a strategy without execution. What's incredibly cool about Big Data, and the web-based nature of so much of it, is that just as easily as you can access anything you can see, you can transform the data, perform an operation on it, and automate a resulting action. Huh? Here's an example. You know consumers and even your B2B purchasers research prices online, and that loyalty to any one vendor has deteriorated as buyers have more pricing knowledge a search and a mouse-click away. But you are smarter than your competitors because you're already doing the extra 10%. So you set up automated monitoring of your competitors' pricing, and when their price drops below yours, your Big Data integration platform calculates the difference plus 10%, logs into your ecommerce site, and adjusts your prices automatically, all within a few ticks of the clock.
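The repricing rule in that example can be sketched in a few lines. This is a hypothetical illustration of the pricing logic only – the login and price-update steps would be handled by the integration platform, and "the difference plus 10%" is read here as undercutting by the gap plus an extra 10% of it:

```python
def reprice(our_price, competitor_price, extra=0.10):
    """If a competitor undercuts us, drop our price by the gap plus an
    extra fraction of the gap; otherwise leave our price unchanged."""
    if competitor_price >= our_price:
        return our_price
    gap = our_price - competitor_price
    return round(our_price - gap * (1 + extra), 2)

# Competitor at $90 against our $100: gap is $10, so we drop to $89.00.
```

The point is not the arithmetic but the automation: the rule runs every time the monitoring robot sees a new competitor price, with no human in the loop.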

And the beauty is that this can all be set up in hours, if not a few days, and you don’t have to bring in an army of developers or consultants to create custom code to do any of this.

So let the Big Data party begin. Kapow Software is here to help. To learn more about Big Data Solutions or to set up a Big Data Sales Consultation, click either link, because you’ve read this far and deserve it!

By: Rick Kawamura, Director of Marketing

Oct 28

The best thing about working in Business Development is meeting with partners and customers. It’s a great way to stay on top of technology trends, and my goal for this blog post is to keep you posted on developments I see on the road.

This year was Kapow Software’s first time exhibiting at Salesforce.com’s Dreamforce event where the buzz was all about the social enterprise and the value of collaboration and interaction in business and government. Kapow Software, together with our partner Threshold Consulting, made it to the final of the Salesforce Hackathon with a bi-directional integration between Salesforce Chatter and Google+ — a unique social integration feat because Google+ doesn’t support APIs.

We returned to Moscone Center in San Francisco for Oracle OpenWorld. Arik Hesseldahl, in his AllThingsD.com blog, offers an insightful analysis of the rivalry between Oracle’s Larry Ellison and Salesforce.com’s Marc Benioff, which we witnessed firsthand. Arik also explains the two visionaries’ divergent views of the cloud, which can be summed up as a hybrid environment vs. the pure cloud.

For our part, we knew our Founder and CTO Stefan Andreasen's Oracle OpenWorld session on automating content migration resonated when one attendee said it "made the conference worth it in its own right." Oracle and Kapow Software announced a Documentum trade-in campaign with a special offer for customers making the move from Documentum to Oracle WebCenter using Kapow Software's automated migration tools.

Next on my itinerary were two events for the intelligence community. I have never seen so many different national law enforcement agencies, as well as state and local police departments, as I did at ISS World in DC. They came for training on the technologies, techniques, and legal considerations of intelligence gathering and analysis. Back in California was Suits & Spooks, the so-called anti-conference designed to bring the greatest Silicon Valley entrepreneurs together with US intelligence agencies. (There wasn’t an actual suit to be seen anywhere.)

Having been involved with the technology side of intelligence for over 10 years, I’m astounded by how far we’ve come from simple reports and dashboards. The focus now is on social network analysis, geo-location-based visualization, and enhanced reality. But for all of the advances in analytics and visualization, the greatest challenge with intelligence continues to be getting access to the data, particularly as the majority of the data – big data – is outside the control of any one organization.

Last week presented the dilemma of choosing between two events: Pyxis Mobile's Connect 2011 Summit and the GEOINT 2011 Symposium hosted by the United States Geospatial Intelligence Foundation (USGIF). Kapow Software exhibited and presented at both, but I ended up choosing the Pyxis Mobile conference – and I'm glad I did. We met with a lot of great customers and partners, and my hat goes off to Chris Willis and Pyxis for organizing such a successful event. What was most enlightening for me is the impact that tablets (iPads and Androids) are having on enterprise mobilization strategies. Most companies are developing strategies to mobile-enable enterprise apps, and attendees were impressed with Kapow's ability to integrate web application data without the need for APIs or any other programmatic interface. Having long resisted mobilization, IT finally seems forced to act by the ubiquitous "consumerization" of mobile devices. And tablets are starting to deliver to field workers what has been promised for so long.

All in all, technological development in all of these areas is moving at neutrino speed. I’ll do my best to keep you informed. I’m back in the office this week, catching up on everything; hence, the timing of this post.

By: Rory Byrne

Mar 25

Data assembly is now the biggest barrier to good analytics

Business Intelligence continues to become more and more strategic as companies compete in today's global economy. Every department now uses analytics to better understand financials, business processes, customers, competitors, and market trends – the critical understanding needed to optimize execution.

As we all know, analytics is no better than the data behind it, and thus discovery and assembly of data has become an ever more important part of successful business intelligence.

As your company's ecosystem grows beyond your firewall into partner apps, competitor websites, and social networks, data spreads rapidly, and more and more data assembly ends up tied to manual harvesting methods or the purchase of dubious data from vertical information providers.

This means that knowledge workers spend more time on Data Discovery and Data Assembly, leaving less time for analyzing and executing on the results.

I often see scenarios where knowledge workers spend more than 50% of their time on just data assembly, time which takes away from analysis, reporting and execution.

This is not good.

And it’s exactly why more companies rely on automating the data assembly process. Finding methods to easily and scalably instruct which data to get from where and how to transform it into the needed format – basically they look for a solution to do automated data delivery.

The good news is that this solution already exists. The Kapow Katalyst platform is proven by more than 500 companies all over the world.

Here’s a concrete example. Fiserv, a large financial services company, needed to understand the value of their assets in real-time for compliance reasons. To solve this problem the treasurer hired a group of people to manually log-in to Fiservs accounts spread over more than 300 banks in more than 20 countries. This was expensive, error-prone, and data was often outdated.

Consequently, Fiserv looked for an automated solution and found Kapow Katalyst. Within three months they had built Kapow ETL robots that could automatically log in to the web front end of Fiserv's accounts at all 300 banks and pull out the required information. Not only did this relieve the knowledge workers of manual data assembly, it also gave the treasurer real-time data for point-in-time regulatory compliance.
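The aggregation logic behind such robots can be sketched roughly as follows. This is not Kapow's implementation – `fetch_balance` stands in for a robot that logs in to one bank's web front end and scrapes the balance, and it is passed in here so the aggregation loop stays testable:

```python
def aggregate_positions(banks, fetch_balance):
    """Collect the current balance from every bank portal and total them.
    A failed fetch is recorded as None so someone can follow up manually
    instead of silently corrupting the total."""
    positions = {}
    for bank in banks:
        try:
            positions[bank] = fetch_balance(bank)
        except Exception:
            positions[bank] = None
    total = sum(v for v in positions.values() if v is not None)
    return positions, total
```

Run nightly (or hourly) across all 300 banks, a loop like this replaces the room full of people cutting and pasting balances into a spreadsheet.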

Needless to say this created a lot of value for Fiserv.

I recommend you read the whitepaper, Hyper Management of Working Capital, written by Thomas W. Warsop, Group President for Fiserv.

By: Stefan Andreasen

Oct 06

I recently read a great article about Data Urgency and how that relates to Data Value by Robin Bloor.

Robin’s writes “Data is urgent if it loses value while the receiver is waiting for it”.

Just think about the following analogy.

When you go to the supermarket to buy an apple, the price you're willing to pay is directly related to how closely it resembles one you'd pick off the tree yourself. On the other hand, if the apple isn't fresh and tasty looking, it has no value at all. The apple loses all its value between the moment it was picked, fresh and delicious, from the tree and the moment it was delivered to you, rotten and completely worthless.

Today I presented at the Corda Visual Evolution conference in Las Vegas on "Using Kapow to enhance Corda CenterView with real-time Web data", where I discussed data urgency and how it relates to value.

One of the customer examples I presented was Fiserv, which uses Kapow to automatically aggregate financial account information from more than 300 banks in 10 countries and display the data in Corda's CenterView dashboard for point-in-time regulatory compliance.

Previously, the treasury department had no way to collect the bank data other than to manually log in to each of the 300 banks and cut and paste (i.e., pick) the financial transaction data into a spreadsheet. Because of the time it took to collect the data manually, the data was not only stale – and thus out of compliance – but also riddled with errors introduced by the error-prone manual collection methods.

Thomas W. Warsop, Group President at Fiserv, wrote a detailed white paper about how “technology supports the work of corporate treasury” which you can download here to learn more.

Within Kapow’s customer base of almost 500 customers we see more and more examples of how “flawless data” is now “picked” 100% automatically, delivering critical real-time value to our customers.

The urgency of valuable data requires real-time automated data collection.

By: Stefan Andreasen

May 14

Social Media and BI are the sweet and sour, yin and yang, oil and vinegar topics of interest in BI these days.  Can the real-time, user-generated, free flowing tweets and online conversations of social media benefit traditional enterprise BI?

In the past week, Information Management published the following, Social Media Will Play a Big Part in BI’s Future.

No doubt the volume of social media is growing exponentially.  And surely, this data contains valuable information on competitive intelligence, product feedback, customer service, and even market trends.

But there’s a gap in social media data access.  Traditional BI tools can’t access all this unstructured data and present it in a usable format, let alone filter out all the noise.

What’s needed is an automated, flexible way to access hundreds or even thousands of sites in real-time, extract only the relevant content, add structure to the data, and load it easily into a database.  What’s needed is Web Data Services, and it exists today.

Social Media Data Access:

With hundreds of sites to monitor (most with no API access) and an already overburdened IT department, accessing social media data becomes the foremost hurdle to overcome. With Web Data Services, all of this can be achieved with no coding. Kapow robots (automated data collection processes) are easily created with visual point-and-click technology, eliminating the need for complex, time-consuming coding and scripting. If you can see the data in a web browser, Web Data Services can extract it.

Enriching Unstructured Data:

The trick is taking disparate text-based tweets, comments, blog posts, online conversations, etc., and structuring them in a way that lets your analysts understand when something was said, who said it, and how it applies to your keywords or hypothesis. But getting there is harder than you might think. Web Data Services surgically transforms unstructured social media web data to provide superior data quality without the noise. Included, but not often talked about, is the ability to apply regular expressions (through a graphical interface), encoding and decoding, date formatting, string calculations, conditional expressions, numeric calculations, and multiple language support.
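As a rough illustration of that enrichment step (plain Python, not the Kapow graphical tooling, and with an invented timestamp format), here is how a free-text tweet might be given structure – who, when, and which tags – using regular expressions and date parsing:

```python
import re
from datetime import datetime

MENTION = re.compile(r"@(\w+)")
HASHTAG = re.compile(r"#(\w+)")

def structure_tweet(raw, author, timestamp):
    """Turn one free-text tweet into a structured record an analyst
    can filter by author, time, mention, or hashtag."""
    return {
        "author": author,
        "posted_at": datetime.strptime(timestamp, "%Y-%m-%d %H:%M:%S"),
        "mentions": MENTION.findall(raw),
        "hashtags": HASHTAG.findall(raw),
        "text": raw.strip(),
    }
```

Once every post is a record like this, the "when, who, and how it applies" questions become simple queries instead of manual reading.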

Making the data readily available:

Web Data Services makes it easy to output the structured social media data into multiple formats, such as a SQL database, vendor hosted database, Java or C# data structure, SOAP or REST Web service, RSS, CSV, or XML.
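To make the output side concrete, here is a minimal stdlib-only sketch (illustrative only, not Kapow's actual export mechanism) that serializes the same structured records into two of those formats, CSV and XML:

```python
import csv
import io
import xml.etree.ElementTree as ET

def to_csv(rows):
    """rows: a non-empty list of flat dicts sharing the same keys."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

def to_xml(rows, root_tag="posts", row_tag="post"):
    """Wrap each record in a <post> element under a <posts> root."""
    root = ET.Element(root_tag)
    for row in rows:
        el = ET.SubElement(root, row_tag)
        for key, value in row.items():
            ET.SubElement(el, key).text = str(value)
    return ET.tostring(root, encoding="unicode")
```

The same records could just as easily be written to a SQL database or exposed as a REST feed; the point is that once the data is structured, the output format is a detail.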

Social media is BI 2.0. It opens the door to listening in on what people are saying about your brand, products, and services, and it uncovers untapped market opportunities and customer pain points. So rather than reacting, you are out in front, predicting future events and gaining first-mover advantage.

By: Rick Kawamura

May 03

I just read an interesting blog post by Richard MacManus, “10 Ideas For Web of Data Apps” that explains the fundamentals of how people generate ideas for new applications.

My guess is that all examples described in the blog came from people combining what they saw on multiple websites into an idea of how to leverage the data in a new, valuable application.

However there is a problem here!

It’s not a given that all that data is available as Linked or Open data.  In other words, not all data necessarily has a documented method of programmatic access as, for example, an XML feed, RSS feed, or a REST or SOAP service. Without this programmatic access, no existing application or Mashup builder can get to the data which prevents these great ideas from ever materializing. WHAT A BUMMER!

Web Data App Idea Generation
More often than not, the data you need to combine into your great new application idea is only available in a web browser. This means you have to either drop the idea or settle for the subset of the data that has documented programmatic access.

Wouldn’t it be cool if all the data you see in a web browser were always available?

Well, that is what Web Data Services is all about. Browse the rest of this blog to learn more.

As always, please send me your comments, my email is sa at kapowtech.com

By: Stefan Andreasen, CTO and Founder

Apr 23
Kate Gosselin on Dancing with the Stars. Photo Credit: ABC

Can Social Media be used to predict the outcome of Reality TV shows such as American Idol and Dancing with the Stars?  We created Reality Buzz based on our real-time automated web data collection platform to find out.

Jennifer Zaino over at Semantic Web wrote a nice article that captures the essence of Reality Buzz and our process of using real-time social media web data to build intelligence into predictive analytics: Taking Sentiment Analysis to Dancing with the Stars and American Idol.

Check it out. And if you need to automate the access, collection, harvesting, scrubbing, grabbing, or scraping of real-time web data to improve market or competitive analysis and sharpen your strategic decision making, we're here to help.

By: Rick Kawamura, Director of Marketing

Apr 12

Can oddball metrics – Craigslist apartment listings, Subway ridership tallies, Broadway ticket sales, counts of empty stalls in city parking garages, cardboard box production, or diesel fuel consumption – be better economic predictors than traditional, months-old government reports from the labor department? Absolutely!

The Wall Street Journal just published "New Ways to Read Economy – Experts Scour Oddball Data to Help See Trends Before Official Information Is Available", where author Cari Tuna provides numerous examples of economists around the country using non-traditional methods to better predict economic trends and direction.

The reason for this trend is that traditional reports and data are out of date and often not very accurate. Who can wait six months for a government report before making a decision?

Enter Web Data Services

What would make these oddball metrics more valuable and accurate? Automating the collection process across multiple data sources and loading the results into the database or BI tool of your choice.

Imagine you could automate the monitoring of hundreds of data sources in real time and react to changes overnight. What data would you monitor?

Interest rates?  Gold Prices?  Credit Score reports?  Salesforce data?  Apartment listings?  Competitor’s pricing?  Product Buzz?  Customer complaints?  Financial transactions?  Bank balances?  Twitter?  Facebook?  Google Trends?  Linkedin profiles?  Partner inventory?  Shipment dates?

If you can see it in a web browser, whether on the public web, behind a login screen, or behind your firewall, that data can be accessed with Web Data Services to provide you with improved predictive analytics and strategic decision making.
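Once collected, those observations need to land somewhere analysts can query them. Here is a hedged sketch (the table name and tuple layout are invented for illustration) of loading scraped metrics into a SQLite database:

```python
import sqlite3

def load_metrics(db_path, metrics):
    """metrics: iterable of (source, metric, value, observed_at) tuples,
    e.g. nightly scrapes of the oddball sources listed above.
    Returns the total number of rows now in the table."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS observations "
        "(source TEXT, metric TEXT, value REAL, observed_at TEXT)"
    )
    con.executemany("INSERT INTO observations VALUES (?, ?, ?, ?)", metrics)
    con.commit()
    return con.execute("SELECT COUNT(*) FROM observations").fetchone()[0]
```

With each night's harvest appended to a table like this, trend analysis is just SQL away, rather than buried in browser tabs and spreadsheets.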

As a fun example, Reality Buzz uses Kapow's Web Data Server to monitor popular social media sites and gauge America's sentiment toward contestants on American Idol and Dancing with the Stars. Overnight, data is collected and evaluated, and predictions are made about the fate of the contestants before the following night's elimination show.

Hundreds of businesses incorporate Kapow’s Web Data Server solutions to improve competitiveness, product offerings, and strategic decision making.  You can too.  What are you waiting for?

By: Rick Kawamura


The Kapow Katalyst Blog is…

... a collection of insights, perspectives, and thought leadership around Application Integration.

Comments, Feedback, Contact Us:

blog at kapowsoftware.com

Get Our RSS Feed