
12th March 2026

Why 24/7 Performance Monitoring is critical for solar asset revenue

1,500 hours. That is the (perhaps generous) average number of sunlight hours the UK gets each year. With this in mind, it’s crucial for utility-scale PV to maximise production by maintaining availability.

Out of those 1,500 hours, for how many is your site actually generating? Remember that even a site with a rigorous preventative maintenance schedule and a competent O&M provider still experiences disruptions.

How exactly will 24/7 Performance Monitoring protect the financial health of your asset? We sat down with Kieran Hill-Cousins, our Monitoring and Performance Manager, to find out. Read on to discover how you can protect your solar asset revenue.

It’s easy to forget that your asset doesn’t exist in isolation from the rest of the world; it is vulnerable to disruptions anywhere along the electrical grid. The grid can be hit by weather events, such as storms, that can knock out your energy generation and are completely outside your control. A chain is only as strong as its weakest link, after all.

If we focus on one part of the wider electrical ecosystem, such as overhead powerlines, we start to get a sense of how vulnerable our assets may be. According to the Royal Meteorological Society, UK electricity networks experience roughly one weather-related fault per 100 km of overhead line annually, a number that doubles or triples during stormy years.

With roughly 285,000 km of overhead lines covering the UK, that rate amounts to roughly 2,850 weather-related faults in a typical year, rising to somewhere between 5,700 and 8,500 during stormy years.
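For readers who like to check the maths, the scale of the problem falls out of just two inputs, the fault rate and the total line length quoted above:

```python
# Back-of-the-envelope estimate of weather-related overhead-line faults,
# using only the two figures quoted above.
OVERHEAD_LINE_KM = 285_000       # approx. total UK overhead line length
FAULTS_PER_100KM_PER_YEAR = 1    # typical year (Royal Meteorological Society)

typical_faults = OVERHEAD_LINE_KM / 100 * FAULTS_PER_100KM_PER_YEAR
# The rate doubles or triples during stormy years.
stormy_low, stormy_high = typical_faults * 2, typical_faults * 3

print(f"Typical year: ~{typical_faults:,.0f} faults")
print(f"Stormy year:  ~{stormy_low:,.0f} to ~{stormy_high:,.0f} faults")
```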

This level of unexpected downtime is bound to have a financial impact on any generating asset. One of the key purposes of effective Performance Monitoring is to help minimise that downtime.

The scenario

Let’s take a look at an example that illustrates the difference 24/7 Performance Monitoring can make.

A solar site experiences a grid event at 3:00 am on a summer Saturday, cutting the incoming supply to the site. Without supply to the ancillaries, communications and the security system soon stop functioning, and the site is unable to export due to the event.

What kind of response might you get from your O&M provider?

  • Response 1 - Eight hours of lost generation

    O&M #1 has staffed operations Monday to Friday, 8:00 to 17:00, with an on-call engineer at the weekend. The on-call engineer wakes at 7:00 am on Saturday and notices the trip notifications at 7:15.

    They log into the SCADA system remotely to investigate. Eventually, they deduce that the trip was likely caused by an event outside the solar farm and contact the relevant DNO to confirm. Some O&Ms use AI tools to make recommendations during this diagnostic stage, which could cut about half an hour off the process.

    They are informed that a DNO engineer will need to visit the site to close the relevant breaker. At about 9 am, the on-call O&M engineer gets ready and begins the four-hour journey to the site.

    The O&M and DNO engineers both arrive at the site and reenergise, with the site coming back online at 14:00.

  • Response 2 - Five and a half hours of lost generation

    O&M #2 has a 24/7 staffed Control Room.

    An operator notices the trip via the SCADA system at 3:05 am and begins the investigation. At 3:15 am, they confirm it is likely a grid event and contact the relevant DNO to confirm this is the case, and that this will require a DNO engineer to attend.

    Further conversations with the DNO Control Room show that a DNO engineer can be on site at 11 am. The 24/7 Control Room prepares an information pack for the on-call engineer, informing them of the trip, the cause, and the planned solution.

    The next shift speaks with the on-call engineer at 7 am to double-check that they have everything they need.

    The on-call engineer leaves home at 7:30 and heads to the site, and at 11:30, the site is reenergised.

We can see that there is at least a two-and-a-half-hour difference in availability between these two scenarios. This may not seem like a huge amount, but it adds up quickly over time and across a portfolio.

A component that caused 30% more downtime would be replaced quickly; your O&M provider should be held to the same standard. Lost revenue should be addressed on all fronts.

There is also a big difference in how the asset manager experiences each scenario. In the first case, they experience more or less the entire fault resolution process, and the stress that comes with it. However, in the case of the 24/7 Control Room, the asset manager is greeted with a solution to a problem they didn’t even know they had.
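To put that availability gap into pounds, here is an illustrative sketch. The site capacity, average output over the outage window, and power price below are assumptions for illustration, not figures from the scenario above:

```python
# Illustrative revenue impact of the gap between the two O&M responses.
# Site size, output fraction, and price are assumed for illustration only.
SITE_CAPACITY_MW = 30        # hypothetical utility-scale site
AVG_OUTPUT_FRACTION = 0.5    # assumed average output over the morning window
PRICE_GBP_PER_MWH = 70       # assumed capture price

def lost_revenue(hours_offline: float) -> float:
    """Revenue lost for a given number of daylight hours offline."""
    return hours_offline * SITE_CAPACITY_MW * AVG_OUTPUT_FRACTION * PRICE_GBP_PER_MWH

response_1 = lost_revenue(8.0)   # O&M #1: eight hours of lost generation
response_2 = lost_revenue(5.5)   # O&M #2: five and a half hours

print(f"O&M #1 loss: £{response_1:,.0f}")
print(f"O&M #2 loss: £{response_2:,.0f}")
print(f"Difference:  £{response_1 - response_2:,.0f} per event")
```

Scale that per-event difference across a portfolio and a twenty-plus-year operational life, and the case for faster fault resolution makes itself.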

 

These examples are far from hypothetical, and we have seen more than our fair share of energy-generating assets being knocked out by grid events. The job of a Performance Monitoring provider should not be to pass problems up the chain but to work in partnership with the asset manager to keep their site operating at peak performance.

A little bit of downtime can be a big problem for solar asset revenue

Major faults may grab attention, but for most solar farms and battery storage facilities, the biggest threat to long-term performance is the accumulation of smaller outages and delays.

A grid event here, a comms fault there, a delayed inverter repair: none of these seems too significant in isolation. However, over the operational lifespan of your asset, these moments of unavailability add up to meaningful lost generation and revenue.

The reality is that the grid infrastructure your asset relies on is ageing and undergoing significant upgrades across the UK. Both of these factors increase the likelihood of disruption to generation.

A recent report by grid connection specialists Roadnight Taylor showed that nearly 10% of renewables projects in England and Wales will experience, on average, four weeks of downtime a year due to planned grid works.
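Taken at face value, that headline figure implies a meaningful expected downtime averaged across every project, not just the affected ones. A quick expected-value sketch using only the report's two numbers:

```python
# Expected downtime from planned grid works alone, averaged per project
# per year, using the report's headline figures.
AFFECTED_SHARE = 0.10    # nearly 10% of projects affected
DOWNTIME_WEEKS = 4       # average downtime for affected projects

expected_weeks = AFFECTED_SHARE * DOWNTIME_WEEKS
expected_days = expected_weeks * 7

print(f"Expected planned-works downtime: {expected_weeks:.1f} weeks "
      f"(~{expected_days:.1f} days) per project per year")
```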

When you combine this with component failures, grid events, and curtailment, it becomes clear that availability cannot be left to chance.

Without a strong O&M strategy, supported by experienced performance monitoring, many assets slowly lose performance through a series of small, avoidable losses, a classic case of death by a thousand cuts.

Turning performance monitoring data into better asset management

A competent Performance Monitoring team will have many tools at their disposal, but the most valuable of all is experience. Your SCADA may report faults, but it’s the person in front of the screen who diagnoses the cause.

A 24/7 Control Room like Ethical Power’s can process over a million data points an hour, but without a team of well-trained and experienced renewable energy experts to make sense of it all, it’s just noise.

This combination of data and experience delivers more than just rapid fault resolution; it also protects your revenue when it comes to supporting commercial asset management.

Accurately classifying and reporting outages helps you:

  • secure compensation from the DNO for grid events
  • benchmark your performance
  • budget and forecast with confidence

To link technical performance to financial outcomes, you need to be fully confident in your understanding of that technical performance. 24/7 Performance Monitoring gives you that confidence.

 

With real-time insight, expert analysis and proactive fault resolution built into the way your asset is managed, you’re not just reacting to problems; you’re minimising your exposure to financial impact.

If you’re serious about protecting performance and safeguarding every megawatt of revenue, it’s time to rethink how you monitor your assets.

Contact Ethical Power to see how 24/7 performance monitoring can reduce downtime, improve availability, and support stronger financial outcomes for your portfolio.