Mid-September. Two prominent U.S. Senators were proposing local choice – a la carte for broadcast stations.
Aereo formally petitioned the FCC to be designated an MVPD. FCC Chairman Tom Wheeler was threatening to reclassify broadband as a Title II service. HBO executives were secretly preparing to offer a direct-to-consumer over-the-top service. Somebody decided to drop the word “cable” from the cable industry’s signature annual event, renaming it the Internet and Television Expo (INTX).
With the entire pay TV universe being shaken, it was a comfort to come back home, to Denver, the cradle of the cable industry, for Cable-Tec Expo, where taking care of the nuts and bolts of the cable business was the order of the day.
Today it remains somewhat murky where the whole pay TV business is going, or precisely what it will look like when it gets there, but MSOs know for certain they need to prepare their networks to deliver whatever has to be delivered, and Cable-Tec Expo made it crystal clear that those networks will still depend heavily on DOCSIS 3.1.
Google wants to spur demand for gigabit broadband, and it has succeeded.
Cities are fighting each other to get Google Fiber, other cities with no chance yet at Google Fiber are begging competing ISPs to install gigabit broadband, and many ISPs have either begun or have promised to start rolling out gigabit services, either responding to Google’s (or someone else’s) gigabit offering, or just to milk their most lucrative markets.
Most MSOs making gigabit promises expect to make good with DOCSIS 3.1, aka Gigasphere – a term still rarely heard in sessions, on the show floor, or in conversations over cocktails.
There’s some skepticism in the cable industry that most of the people who say they want gigabit broadband really and truly need it for anything any time soon, and even less expectation that those who say they want it would be willing to pay for it were it made immediately available. Someday, maybe, but not now.
A lack of enthusiasm for gigabit delivery doesn’t do anything to undermine the move to DOCSIS 3.1, however. Researchers are demonstrating that D3.1 will deliver some very important subsidiary benefits: doing little more with your network than populating it with D3.1 modems can yield substantial capacity gains.
In the pre-conference DOCSIS 3.1 and Wireless Symposium, Belal Hamzeh, director and principal architect of CableLabs, reported that 30 percent of a network’s QAMs would be able to go to 2048 QAM simply because the network is populated with D3.1 cable modems.
It is not necessary to swap out all 3.0 cable modems for 3.1-compliant models to get the benefits, though the more the better.
[Slide: Capacity gain evaluation – downstream channel]
“Take your system, with no changes except putting in 3.1, you can get a 46 percent expansion of capacity over the DOCSIS 3.0 network,” he said.
“It will be a long time before we have no 3.0 modems in the network,” said Jorge Salinger, Comcast’s vice president, access architecture, but still.
He said this demonstrates the value of the backward compatibility built into D3.1, noting that it’s easy to run 3.0 and 3.1 cable modems at the same time. Further capacity gains can be reaped by improving signal-to-noise ratios (SNR) on a network.
“The great thing about 3.1 is that the more you improve your signal to noise ratio, the higher orders of modulation you can go to,” Hamzeh explained, leading to additional capacity increases.
Hamzeh said that improving SNR by 3 dB earns you almost nothing on a network equipped with D3 modems, but with D3.1, “Every 3dB improvement in your network will give you about 10 percent increase in capacity.”
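The arithmetic behind that rule of thumb is straightforward: each 3 dB of SNR headroom supports roughly one more bit per symbol, i.e., one step up the QAM ladder, and at the modulation orders D3.1 targets, one extra bit on a 10-bit (1024-QAM) symbol is a 10 percent gain. A back-of-the-envelope sketch (illustrative figures, not CableLabs’ model):

```python
# Rough capacity-per-symbol gain from stepping up the QAM ladder.
# Each ~3 dB of SNR improvement supports roughly one more bit per symbol.
import math

def bits_per_symbol(qam_order):
    """Bits carried by one symbol of a QAM constellation."""
    return int(math.log2(qam_order))

base = bits_per_symbol(1024)   # 10 bits/symbol at 1024-QAM
step = bits_per_symbol(2048)   # 11 bits/symbol at 2048-QAM
gain = (step - base) / base    # relative capacity gain from one 3 dB step
print(f"1024-QAM -> 2048-QAM: {gain:.0%} more bits per symbol")  # 10%
```

The percentage shrinks as the starting order grows (the same extra bit is a smaller fraction of a bigger symbol), which is why the “about 10 percent per 3 dB” figure applies at the high orders D3.1 operates in.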
Comcast expects to start trialing DOCSIS 3.1 technology in the second half of next year, shortly after D3.1 headend equipment becomes available.
In anticipation, Comcast is about to start characterizing SNR throughout its network, according to Salinger.
[For more on the subject, listen to our webcast Winning with DOCSIS 3.1 — Just Add Cable Modems, available on demand for a limited time]
Wi-Fi: why fight it?
The market drivers for cable Wi-Fi haven’t changed much, though one was recently imbued with some notable extra weight. They include satisfying subscriber demand for broadband via Wi-Fi, keeping competitive with LTE, and subscriber retention, among others. T-Mobile had already endorsed dual-mode (LTE/Wi-Fi) connectivity, but when Apple recently enabled dual-mode operation in iPhones, other carriers chose to support the feature, which will serve to make Wi-Fi a more integral component of the wireless ecosystem.
Cable’s wireless play is Wi-Fi, and it’s important for cable to own the Wi-Fi element of the ecosystem. “Voice is becoming a critical application for your subscribers, whether you’re ready for it or not,” noted Vincent Pandolfi, consulting systems engineer, Cisco Systems.
There just hasn’t been much money in Wi-Fi.
There are ways to make money, but some of them will depend on cutting business deals with erstwhile rivals.
Assuring voice-over-Wi-Fi quality becomes a path to monetization, said Jeffrey Valley, senior director of consulting engineering, Alcatel-Lucent IP Routing & Transport, explaining that cable operators can cut deals with mobile network operators (MNOs) to provide quality of service (QoS).
“What happens in the background, these devices will establish a tunnel through the network – your network – to a gateway,” Valley said.
“The device is still getting an IP address from its carrier. So when you talk about voice over Wi-Fi, or video over Wi-Fi – okay, so as a tunnel provider, how do I ensure QoS? That requires a relationship between MNO and MSO.
Become a reliable roaming partner.
That leads to monetization,” he said.
Then the issue is seamless connection from Wi-Fi to LTE, said Manish Jindal, vice president and CTO, broadband and media accounts customer unit, Ericsson North America.
“End users should see no difference in experience,” Jindal said.
“When we can do that, as cable operators, then we have the wireless advantage.
That’s why we need to optimize our networks, so users can’t tell the difference between them.”
“Historically missing is how to find a network, and how to authenticate,” said Justin Colwell, VP, access network technologies, CableLabs. “Most of you flew into Denver, turned on your phone, and you found your network and were authenticated. We think Wi-Fi should be a similar experience.”
“Get off a plane, authenticate, yeah that’s great,” said Valley, but what is needed beyond that is subscriber management, and that’s still largely lacking.
“You should be able to bring policy and account information with the user,” he said.
What that allows you to do is treat your customers to additional experiences.
For example, parental controls today, as soon as you walk into a Wi-Fi hotspot, you get a new IP address, and none of the policy is sticky, he said. Policy controls should follow the customer.
Patrick Kaiser, director, wireless product marketing, Huawei Technologies, thinks that there can be a great business in managing small cells – and that cable is in the best position in the U.S. to do that.
“The success we see is the sharing model,” Kaiser said. “In cellular, sharing is common in Europe – they share cell sites and cell towers. There’s a huge benefit in controlling expense. The same model in the U.S. should help.”
U.S. cable and U.S. MNOs ought to share in-building infrastructure, he said, especially small cells (80 percent of all data is generated in-building, he noted). The idea is to put in a small cell that supports both Wi-Fi and LTE and charge for managing it.
“You can’t have five competitors offering in-building in the same space. Cable should have a leg up, but partnership is what we think would work best.”
Community Wi-Fi
Community-wide Wi-Fi services depend on a critical mass of installed modems, each partitioned so that most of the modem’s capacity remains dedicated to the use of the private subscriber, with the remainder reserved by the MSO for public access.
That already exists, attracting the attention of a growing number of service providers, including Comcast’s Xfinity, Cablevision Systems, FON and others.
Community-wide services sans congestion and replete with seamless connectivity, though?
Working on it.
CableLabs’ system engineer Vivek Ganti said, “Community Wi-Fi will allow service operators to leverage bandwidth for the private and public networks and for roaming partners. Sounds fantastic, but that’s Community Wi-Fi.”
Seamless connectivity is possible, but first there are some pain points that need to be addressed, Ganti said. “We must make sure there is no mix-up of private and public data. And we must make sure the network is capable of inter-connectivity.
And QoS is a challenge as well.”
Community Wi-Fi QoS essentially means assuring that private clients are not affected by public clients, Ganti said.
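That guarantee can be thought of as a strict-priority split of the modem’s capacity: the private subscriber’s demand is served first, and public clients share only what remains. A toy sketch of that partitioning logic (the numbers and function are purely illustrative; real gateways enforce this in firmware QoS queues):

```python
def allocate_bandwidth(total_mbps, private_demand, public_demands):
    """Serve the private subscriber first; public clients split the remainder.

    A crude model of the strict-priority partitioning a community-Wi-Fi
    gateway enforces so public clients cannot degrade the private client.
    Returns (private_share, list_of_public_shares) in Mbps.
    """
    private_share = min(private_demand, total_mbps)
    remaining = total_mbps - private_share
    if not public_demands or remaining <= 0:
        return private_share, [0.0] * len(public_demands)
    # Split the leftover evenly, capped by each public client's own demand.
    fair_share = remaining / len(public_demands)
    public_shares = [min(d, fair_share) for d in public_demands]
    return private_share, public_shares

# Private subscriber wants 80 of 100 Mbps; two public clients want 30 each.
print(allocate_bandwidth(100, 80, [30, 30]))  # (80, [10.0, 10.0])
```

However the split is implemented, the invariant is the one Ganti named: the public side only ever sees capacity the private side isn’t using.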
Radio resource management, network selection and mobility, security, and traffic management and prioritization are also on Ganti’s list of challenges.
“There is also a dire need for upstream QoS in Wi-Fi.
We’re testing best efforts for Community Wi-Fi adoption now at CableLabs,” he added.
IPv6 is also part of the Community Wi-Fi equation, Ganti concluded.
“Operators are upgrading their Community Wi-Fi networks to IPv6-only with dual stacks and the technology is available for MSOs to deploy Community Wi-Fi. And businesses are supporting Community Wi-Fi. There are business decisions to be made, some technical and operational issues to be solved and checks and balances to be put in place, but Community Wi-Fi is here and available.”
Carrier Grade Wi-Fi
The next step beyond Community Wi-Fi is creating a wireless service similar to, and perhaps eventually indistinguishable from, the cellular experience – Carrier Wi-Fi.
CableLabs’ Mark Poletti and ZCorum’s Scott Helms went over how to build better Wi-Fi networks, both in public places and in customers’ homes.
Poletti, CableLabs’ lead wireless architect, said Carrier Wi-Fi touches the entire Wi-Fi ecosystem, including access point (AP) vendors, AP controllers and servers that can provide data analytics.
The IEEE, Wi-Fi Alliance, and Wireless Broadband Alliance (WBA) have been working on Carrier Wi-Fi requirements and standards for the past few years with three goals in mind: consistent user experience, network management and fully integrated end-to-end networks.
With the building blocks in place in the core network, Poletti said Wi-Fi operators could build large-scale Carrier Wi-Fi networks that meet the three goals.
In addition to the various “alphabet” flavors of Wi-Fi, Carrier Wi-Fi also includes traffic prioritization and procedures for time-sensitive applications, jitter, latency and packet loss, as well as Wi-Fi certified products from the WFA.
An element supporting Carrier Wi-Fi is Hotspot 2.0, which allows mobile devices to automatically join a Wi-Fi network based upon preferences and network optimization whenever the user enters a Hotspot 2.0-enabled area. Hotspot 2.0 brings cellular like capabilities to Wi-Fi users by enabling them to log in one time instead of entering their passwords at every access point when they come in range.
Hotspot 2.0 is the technical specification that the Wi-Fi Alliance uses for hardware, while Passpoint is the certification process that ensures the hardware is Hotspot 2.0-compliant.
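In spirit, the Hotspot 2.0 selection step works like this: the device discovers which roaming consortia nearby APs serve, matches them against credentials it already holds, and joins the best match without prompting the user. A simplified sketch of that logic (the field names and ranking rules here are invented for illustration, not taken from the specification):

```python
def select_network(access_points, credentials, preferences):
    """Pick the best AP the device can authenticate to automatically.

    access_points: list of dicts with 'ssid', 'consortium', 'signal_dbm'
    credentials:   set of roaming-consortium names the device holds keys for
    preferences:   ordered list of preferred consortia (most preferred first)
    """
    usable = [ap for ap in access_points if ap["consortium"] in credentials]
    if not usable:
        return None
    # Rank by consortium preference first, then by signal strength.
    rank = {name: i for i, name in enumerate(preferences)}
    usable.sort(key=lambda ap: (rank.get(ap["consortium"], len(rank)),
                                -ap["signal_dbm"]))
    return usable[0]["ssid"]

aps = [
    {"ssid": "CoffeeShop",  "consortium": "other-isp", "signal_dbm": -40},
    {"ssid": "CableWiFi-1", "consortium": "cable-mso", "signal_dbm": -60},
    {"ssid": "CableWiFi-2", "consortium": "cable-mso", "signal_dbm": -55},
]
print(select_network(aps, {"cable-mso"}, ["cable-mso"]))  # CableWiFi-2
```

The point of the sketch is the user-visible result Colwell described: no SSID picking, no password prompt, just an automatic, policy-driven join.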
Helms, ZCorum’s vice president of technology, spoke about the lessons learned from a project that used DOCSIS proactive network maintenance (PNM) practices for in-home Wi-Fi networks.
When subscribers call in with problems with their data services, Helms said, a large proportion of the calls come from end users who have problems with their Wi-Fi networks. The project set out to find potential channel interference issues from the likes of baby monitors or radio-controlled cars.
With more analytics at their disposal, technicians and customer support can do a better job of finding and fixing interference issues in home Wi-Fi networks.
Another problem with Wi-Fi is that there are so many different profiles (B, G, N, AC), and so much variation in how they are implemented in CPE, with the result being that quality from one CPE device to the next can be wildly uneven.
To that end, CableLabs is now conducting performance tests on access points, and is also characterizing Wi-Fi clients. “We invite all of you to submit access points so we, as an industry, can improve Wi-Fi,” Colwell said.
Wi-Fi SON (self-optimizing networks) is also evolving as an operational and capital expense play. It provides a visualization of customers’ problems and prioritizes them.
“It’s a mobility play for cable and will help MSOs with their operational and capital expenses by getting the best value from the next access points,” said Ken Roulier, CTO of the broadband, cable and satellite division at Amdocs.
Hybrid cloud
Working both the public and private sides of the Internet looks like a winning strategy not just for Wi-Fi; it’s looking like the way to go for all cloud-based operations. Experience is showing that cable operators would do well to go with a hybrid cloud model, for a variety of reasons.
Cisco cloud architect Geoff Arnold said the hybrid cloud model would be the norm going forward, but the various cloud-based technologies and applications need to be knitted together.
Comcast chief technology officer Tony Werner (who moderated the panel discussion) and fellow Comcast employee Mark Muehl, senior vice president, product engineering, said Comcast initially went with more of a private cloud several years ago, but switched after finding drawbacks in a homogeneous model. For example, Werner said, one disruptive Linux kernel would propagate itself unchecked across various systems.
Since then, Muehl said, Comcast has gone with a heterogeneous approach for cloud infrastructure, which includes a deal with Amazon that provides Comcast with cloud insurance and a playground for prototyping applications.
Werner asked Arnold what role the cloud would play going forward.
“The disruption is really at both the application and the infrastructure levels,” Arnold said.
“On the application level it’s obvious that we can deploy new apps much more rapidly because we don’t have to go through the lengthy procedures for provisioning and the human processes.
“There’s also disruption at the infrastructure level because we can slide in the new technologies very rapidly without having to negotiate the applications up and down the stack,” Arnold continued. “In both cases that means that we need decision making that can keep up with the technology. Organizational and business decision making and operational decision making so we can move as fast as the cloud allows us to.”
The key to enabling cloud-based rollouts and apps is getting formerly disparate teams on the same page. In isolation, they have different goals, and that creates friction.
“In the dev ops environment, one of the things we learned over the past few years, when we really stepped on the gas for dev ops at Comcast, is that when you have an ops team and a dev team, you have two different groups, at least, with different incentives,” Muehl said. “The ops team is typically focused on keeping it stable and making sure we have a great experience for our customer. Then we have a dev team that is trying to get a feature out.
“By blurring the lines between those two different organizations and two different goals,” he continued, “by uniting the goals, we get a great experience out for the customers that is stable. By getting everyone on the same page we remove that friction.”
Nick Barcet, vice president, products and pre-sales, eNovance/Red Hat, said that before a company comes to him wanting a cloud-based service, it first needs to figure out what its business needs are.
“I’ve seen many customers that come to us that want a cloud,” he said.
“I spend a lot of time asking them what their business needs are. If they can’t explain their business needs, I can’t help them. By concentrating on the business outcomes, that’s how the cloud becomes successful.”
Other keys to cloud-based apps and services include building apps with self-resiliency and forcing the apps to fail in order to find software failures, according to Arnold and Barcet.
Werner concluded, “We’re still at the very beginning, and I think the balance between how much freedom you give developers versus how much security are things that are going to continue to ebb and flow as we find different models. It’s an exciting time.
I think it’s not only important to our future but it’s imperative.”
It had to be you
Viewing behaviors are changing rapidly as more content is delivered across multiple devices, and for content providers to keep up with those changing behaviors, a series of personalized solutions must be integrated into the viewing experience.
“The Holy Grail is for consumers to watch more content than before. That’s where the business value is – recommending content associated with behaviors,” Christy Martin, chief technology advisor for ThinkAnalytics, explained.
Content, viewing context, viewer preference, and commercial objectives are the four key components to a personalized solution, Martin noted. But first, people must be able to find content.
“Content is the first building block of effective content associations, and that includes metadata – both business and technical – along with user generated data and social media data. The purpose is to follow the natural behavior of viewers,” Martin said.
A phased integration of metadata and the addition of key architectural components were strongly recommended by Martin. “Don’t think small; understand the whole problem.
Work on the back end and grow the system around that.”
Growing the system will take increased engagement by viewers and streamlining the way they find content, according to panelist Tom Kuppinen, manager of sales engineering for Digitalsmiths.
“Customers are not engaged with content because they can’t find it. We need to let content find the viewer and help them find new ways to find it,” Kuppinen said.
One way is through “personalized discovery,” he explained.
“People like what’s popular and similar to content that they’ve watched before. The challenge is to have a deep understanding of metadata. But the volume of data has increased and the velocity is changing constantly, and the variety as well.”
Kuppinen pointed to asset metadata as a vital component to content personalization.
Viewers do not want to have to find content in different ways on different platforms. It’s imperative to provide a streamlined user interface that’s consistent across multiple devices, but that continues to be a challenge for service providers.
Piers Lingle, VP of product development for Comcast Cable, said, “We must be able to say here are the one or two products that will fit certain criteria for a modern user interface.”
Easier said than done, of course.
“We ask what customers are doing and use trust-but-verify methods. But there’s never just one right answer. That makes it harder for product developers,” Lingle said.
When Cox Communications was developing its Contour guide, it dove deep to get qualitative and quantitative data to discover what was common about how different types of viewers use their guides. T.S. Balaji, Cox Communications’ senior director of commercial services and user experiences, said Cox broke down the behaviors of several video customers and found a wide range of users, which included enthusiasts, parents, young adults, series followers and variety viewers.
“All of these profiles have four objectives,” he said. “Content (access), control over when and where they watch content, finding content quickly and easily, and ease of use. Don’t make users think or work at finding content.”
The ultimate goals for the modern user interface, Balaji concluded, are reliability, content and navigation.
Data, and big data
Leveraging a network’s infrastructure and the data today’s networks generate is proving to be not only a cost-saving model but also a way to enhance the customer experience.
“The telecom industry spent $68 billion in 2012 in capital expense, and since 1996 has spent $1.2 trillion. That’s far too large a sum to give poor customer experiences,” said Mark Geiger, senior manager of HFC design and support for Cox Communications.
There are challenges such as assessing the large quantity of data being produced, customer demands and expectations and the evolving competitive landscape.
But Geiger believes there are solutions, including GIS (Geographic Information Systems).
“It tells us where to build, how to build, what’s been built and how best to deliver services.
It also allows us to do network diagnostics to identify things like power outages,” he explained.
“We’ve saved millions of dollars when automating with GIS, especially in our business services segment. It’s a huge game-changer,” Geiger concluded.
“There are huge cost savings in developing root cause analysis. Operators are saving truck rolls and reducing customer calls significantly with real-time analysis,” said Brennen Lynch, product manager for Guavus.
Optimizing the customer experience in today’s connected home by using analytics is another huge upside to real-time analysis, according to Alan Marks, senior marketing manager for Alcatel-Lucent. But there are challenges.
“There’s a whole generation of people using devices in the home in different ways, so it’s no surprise there are complexities.
For instance, on a weekend day, 40 percent of broadband traffic comes from streaming video. We want to collect and understand the data from these devices and assess the data from those services to better understand a customer’s quality of experience,” he explained.
The process he outlined: raw data is collected, then filtered and normalized against device templates and mapped against use case templates. Events can then be visualized in business intelligence tools or can generate alerts.
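The pipeline Marks outlined can be pictured as a few composable stages. In this toy sketch, the device templates, field names and thresholds are invented placeholders, not anything from Alcatel-Lucent’s product:

```python
# Toy version of the analytics flow Marks described:
# collect -> filter -> normalize against device templates -> alert.
DEVICE_TEMPLATES = {
    "settop":  {"min_signal": -70},   # illustrative thresholds, in dBm
    "gateway": {"min_signal": -65},
}

def normalize(event, templates):
    """Attach the device template and flag out-of-spec readings."""
    tpl = templates.get(event["device_type"])
    if tpl is None:
        return None                   # filter step: unknown device, drop it
    return dict(event, degraded=event["signal_dbm"] < tpl["min_signal"])

def to_alerts(events):
    """Map normalized events to alerts for degraded devices."""
    return [f"ALERT {e['device_id']}: weak signal {e['signal_dbm']} dBm"
            for e in events if e and e["degraded"]]

raw = [
    {"device_id": "stb-1", "device_type": "settop",  "signal_dbm": -75},
    {"device_id": "gw-1",  "device_type": "gateway", "signal_dbm": -50},
    {"device_id": "cam-9", "device_type": "camera",  "signal_dbm": -80},
]
print(to_alerts(normalize(e, DEVICE_TEMPLATES) for e in raw))
```

The same normalized stream that feeds alerts here is what, in a real deployment, would also be mapped against use-case templates and visualized in business intelligence tools.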
The future for big data and analytics, Marks concluded, is predictive analysis. “That’s where we need to go. Correlating all of the information is our ultimate goal.”