
CH32V203C8 & TM1637 LED Clock – Part 2

23 March 2025

Keeping the WCH-supplied SDK RTC example program handy, I will add the necessary portions to my original test program. The first task, as always, is proper initialization.

Or is it? Here is the mystery of dealing with peripherals in the “backup power domain”. It might be ticking over just fine as the rest of the chip wakes up from its slumber. But how to tell?

It seems I need a better understanding of the backup domain in general. The specific chip I’m using today, the CH32V203C8T6, is referred to within the documentation as the “CH32V20x_D6”, its specific classification abbreviation. This is what I call the “small 203”. It has 64 KB of flash program memory and 20 KB of SRAM. The “big 203” is either the CH32V203RB (64-pin package) or the CH32V208, available in various packages. They have a nebulous amount of flash and SRAM. It’s quite hard to tell from the documents.

But our “little 203” has ten (10) 16-bit backup data registers that are in the backup power domain and should retain their contents as long as VBAT is maintained. The bigger parts have 42 such registers. These backup data registers can optionally be reset to zeros when a “tamper” event is detected. To the shredders! We’ve been breached! No perilous secrets being kept here, so I’m not going to arm the tamper detector… just yet.

What’s odd to me is that the RTC_Init() function in the RTC_Calendar example sets backup data register 1 to the specific value 0xA1A1, as if to say, “I was here”. Yet the software never subsequently checks this location.

I’m thinking that I might keep the derived calendar values, assuming I progress to that level, in these very backup data registers. But I’m getting ahead of myself. How to properly initialize the RTC, but only if it needs it?
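One possibility, hinted at by that 0xA1A1 value: check a backup data register at boot and only do the full setup when the marker is missing. Here is a minimal sketch of the idea, with rtc_setup() as a hypothetical wrapper around the rtc_init() function I develop below; note that the interrupt enables would still need re-arming on every boot, since they live outside the backup domain:

#define RTC_SIGNATURE 0xA1A1 // "I was here" marker, borrowed from the SDK example

void rtc_setup(void) { // initialize the RTC only if the backup domain lost power

    RCC_APB1PeriphClockCmd(RCC_APB1Periph_PWR | RCC_APB1Periph_BKP, ENABLE);
    PWR_BackupAccessCmd(ENABLE); // unlock the backup domain

    if(BKP_ReadBackupRegister(BKP_DR1) != RTC_SIGNATURE) {
        rtc_init(); // cold start: full initialization
        BKP_WriteBackupRegister(BKP_DR1, RTC_SIGNATURE); // leave our mark
    } else {
        RTC_WaitForSynchro(); // RTC kept ticking; just resync the shadow registers
    }
}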

Assuming that the RTC will need to be initialized at least once, there has to be code to do that, even if I can’t yet determine when, exactly, to do so. So I will write a straight-through process that proceeds as if it knows, truly, that the RTC must be set up from absolute zero. It will be like booting the original IBM PC with DOS, which always thought it was Tuesday, 1 January 1980 upon waking.

The first thing the SDK-supplied example does in its initialization is to enable the PWR and BKP clocks on the APB1 bus:

RCC_APB1PeriphClockCmd(RCC_APB1Periph_PWR | RCC_APB1Periph_BKP, ENABLE);

I seem to recall some confusion over whether the PWR clock actually has to be enabled, but that may have been specific to the -003 chips, as documented in CNLohr’s ch32v003fun repository. A very simple test indicates that the enable register is, indeed, reset to all zeros at boot, so the clocks do need to be turned on. Let’s enable them now, using the above code snippet.

The RTC, being special, does not have a peripheral clock enable bit in any of the usual places. It is controlled by the RCC’s Backup Domain Control Register (RCC_BDCTLR).

The RTC works exactly as one would suppose, generating a periodic interrupt, if so configured. Right now, I’m just resetting the RTC counter to zero, then using it as a seconds counter, and printing out the current value every time the interrupt fires. Here’s the preliminary version of the rtc_init() function:

void rtc_init(void) { // initialize on-chip real-time clock

    RCC_APB1PeriphClockCmd(RCC_APB1Periph_PWR | RCC_APB1Periph_BKP, ENABLE);
    PWR_BackupAccessCmd(ENABLE);

    BKP_DeInit(); // reset the backup domain registers
    RCC_LSEConfig(RCC_LSE_ON); // start the 32.768 kHz crystal oscillator
    while(RCC_GetFlagStatus(RCC_FLAG_LSERDY) == RESET); // add time out
    RCC_RTCCLKConfig(RCC_RTCCLKSource_LSE); // clock the RTC from the LSE
    RCC_RTCCLKCmd(ENABLE);
    RTC_WaitForSynchro(); // wait for the shadow registers to catch up
    RTC_WaitForLastTask();
    RTC_ITConfig(RTC_IT_SEC, ENABLE); // enable the seconds interrupt
    RTC_WaitForLastTask();
    RTC_SetPrescaler(32767); // 32768 Hz / (32767 + 1) = 1 Hz tick
    RTC_WaitForLastTask();
    RTC_SetCounter(0); // set to midnight
    RTC_WaitForLastTask();

    NVIC_InitTypeDef NVIC_InitStructure = {
        .NVIC_IRQChannel = RTC_IRQn,
        .NVIC_IRQChannelPreemptionPriority = 0,
        .NVIC_IRQChannelSubPriority = 0,
        .NVIC_IRQChannelCmd = ENABLE
    };
    NVIC_Init(&NVIC_InitStructure);
}

And here is the interrupt handler, very much as it was when I lifted it directly from the SDK example code:

void RTC_IRQHandler(void) __attribute__((interrupt("WCH-Interrupt-fast")));
void RTC_IRQHandler(void) {

    volatile uint32_t rtc; // seconds from RTC

    if (RTC_GetITStatus(RTC_IT_SEC) != RESET) {  /* Seconds interrupt */
        //USART1->DATAR = '!'; // *** debug ***
        rtc = RTC_GetCounter();
        printf("RTC = %i\r\n", rtc);
    }

    if(RTC_GetITStatus(RTC_IT_ALR)!= RESET) {    /* Alarm clock interrupt */
        RTC_ClearITPendingBit(RTC_IT_ALR);
        rtc = RTC_GetCounter();
    }

    RTC_ClearITPendingBit(RTC_IT_SEC|RTC_IT_OW);
    RTC_WaitForLastTask();
}

I’ve found that there are two ways to keep track of time on a microcontroller, assuming you have a reasonably accurate time base and a periodic interrupt. One is to simply increment a counter every timer tick, which in this case is every second, and then translate that scalar value into a collection of more useful units, such as hours, minutes and seconds when needed. The second way is to do the “translation” in an incremental manner, as each tick occurs, since the typical case is advancing the seconds count and nothing more. Then you check for overflow into the minutes unit, likewise for the hours, and so on. But usually there is only ever one thing that needs updating, and this executes quite quickly with the right code.
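For the first method, the translation is just a few divisions and remainders, performed only when the display needs refreshing. A minimal sketch, assuming the counter wraps at one day:

// convert a scalar seconds count into hours, minutes and seconds on demand
void seconds_to_hms(uint32_t count, uint8_t *h, uint8_t *m, uint8_t *s) {

    count %= 24UL * 60UL * 60UL; // wrap the count at one day
    *h = count / 3600;           // whole hours
    *m = (count / 60) % 60;      // whole minutes, less the hours
    *s = count % 60;             // leftover seconds
}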

I’ve even taken it farther and broken down the unit seconds and ten seconds groups separately, saving the nuisance of converting a binary value to decimal over and over. The same would apply to the minutes, hours and however far you want to go with it.
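Taken down to the digit level, the second method might look something like this sketch; the digits[] ordering (seconds units first) and the 24-hour format are my assumptions for illustration:

static uint8_t digits[6]; // { s units, s tens, m units, m tens, h units, h tens }

void clock_tick(void) { // advance one second; usually only one digit changes

    if(++digits[0] < 10) return; // seconds, units
    digits[0] = 0;
    if(++digits[1] < 6) return;  // seconds, tens
    digits[1] = 0;
    if(++digits[2] < 10) return; // minutes, units
    digits[2] = 0;
    if(++digits[3] < 6) return;  // minutes, tens
    digits[3] = 0;
    // hours count 00-23, so the units limit depends on the tens digit
    if(++digits[4] < 10 && (digits[5] < 2 || digits[4] < 4)) return;
    digits[4] = 0;
    if(++digits[5] < 3) return;  // hours, tens
    digits[5] = 0;               // 23:59:59 rolls over to 00:00:00
}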

But first, it’s now time to start lighting up some LED segments and pretending to tell time. Then we can join the two pieces together and more properly tell the time with this circuit.

I had previously worked on a project that used a similar chip, the TM1638, with the “LED&KEY” module that has eight (8) seven-segment LED displays with decimal points, eight (8) discrete red LEDs and eight (8) momentary contact push buttons. The microcontroller interface is similar, but includes a “STB” (strobe) input that is used as a chip select line. The TM1638 can drive up to ten (10) seven-segment LED displays as well as scan an 8×3 array of push button switches. While reviewing the code, it looks like I started by handling the bit-wiggling interface in software, and left a note to add support for the SPI peripheral. In retrospect, I don’t think that is possible. But what do I know? I’ve been surprised by SPI hardware in the recent past.

What the interface is not is I2C. Per the data sheet: “Note: The communication method is not equal to 12C bus protocol totally because there is no slave address.” Good to know.

I’ve already set up the two GPIO pins I will need to talk to the TM1637 chip as outputs. Since there are no push buttons connected to the clock display module (yet), I won’t be needing to read back any data from the chip, so the data line can stay an output.

I’ll need to make a small adjustment to the GPIO initialization code as the TM1637 data sheet indicates that the “idle” state of both lines is high. Right now they are both low and I have no idea what the poor little chip must think of me.
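Parking both lines high after the GPIO setup is a one-liner. In the sketch below, the port and pin assignments are hypothetical stand-ins, not necessarily the ones my board uses:

#define TM1637_CLK_PIN GPIO_Pin_10 // assumed pin assignments for illustration
#define TM1637_DIO_PIN GPIO_Pin_11

GPIO_SetBits(GPIOB, TM1637_CLK_PIN | TM1637_DIO_PIN); // both lines idle high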

Here is the summary of what the “Program Flow Chart” describes for updating the display:

Send memory write command: 0x40
Set the initial address: 0xC0
Transfer multiple words continuously: <segment patterns>
Send display control command: 0x80-0x87 = brightness, 0x88 = DISPLAY ON
Send read key command (we're not doing this one)

Right now I don’t know which address corresponds to which digit on the display. I’m also not exactly sure which bit corresponds with which LED segment. But I aim to find out. Let’s start out by sending a single bit set to all six of the available addresses, 0xC0-0xC5.
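In outline, that experiment might look like the sketch below. The tm1637_start(), tm1637_stop() and tm1637_write() helpers are hypothetical bit-banging routines I’m assuming here, following the data sheet’s start/stop conditions and LSB-first bit order:

// hypothetical bit-banging helpers, assumed for this sketch:
void tm1637_start(void);         // start condition: DIO falls while CLK is high
void tm1637_stop(void);          // stop condition: DIO rises while CLK is high
void tm1637_write(uint8_t byte); // shift out 8 bits, LSB first, then clock the ACK slot

void tm1637_test(void) { // light one segment in each of the six digits

    tm1637_start();
    tm1637_write(0x40); // memory write command, auto-increment addressing
    tm1637_stop();

    tm1637_start();
    tm1637_write(0xC0); // start at the first display address
    for(int i = 0; i < 6; i++) {
        tm1637_write(0x01); // one segment bit set, addresses 0xC0-0xC5
    }
    tm1637_stop();

    tm1637_start();
    tm1637_write(0x88 | 0x07); // display on, maximum brightness
    tm1637_stop();
}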

It seems I have misunderstood the part about the maximum clock frequency we can use to talk to the chip. The data sheet specifies the “Maximum clock frequency” as 500 kHz, with a 50% duty cycle; not the 250 kHz figure I quoted yesterday.

As the data line is only supposed to change when the clock line is low during normal data transmission, I will try to center the transitions within the clock pulses. With a maximum clock frequency of 500 kHz, each clock transition is 1 us apart. So to aim for the middle of the low part of the clock signal, we should wait 500 ns after the clock line goes low to update the data line. The SDK-provided delay functions, Delay_Us() and Delay_Ms(), only provide microsecond or millisecond time spans. Right now I’m only using Delay_Ms() to time the blinking of the on-board LED. It’s time to deploy some higher-resolution delay functions.

Actually, all I need to do here is to start the STK system timer in free-running mode at the full system frequency of 144 MHz to get ~6.9444… ns resolution. Then I can just pass in the number of clock cycles I want to waste in the delay, add that number to the current STK counter value, then wait for the STK counter to exceed that number. Here’s the STK initialization code:

#define STK_STE (1 << 0) // STK enable bit in CTLR

void stk_init(void) { // initialize system timer

    SysTick->CNT = 0; // reset counter
    SysTick->CTLR = SysTick_CLKSource_HCLK | STK_STE; // enable STK with HCLK/1 input
}

I had to #define the counter enable bit for the CTLR because it’s not #define’d anywhere else. The SysTick_CLKSource_HCLK value happened to be available in the RCC header file.

And here’s the actual delay() function code:

#define NS /7 // STK tick factor for nanoseconds
#define US *144 // STK tick factor for microseconds
#define MS *144000 // STK tick factor for milliseconds

void delay(uint32_t delay_time) { // delay for 'delay_time' clock ticks

    if(delay_time == 0) return; // already late

    uint64_t end = delay_time + SysTick->CNT; // calculate time to end
    while(end > SysTick->CNT) {
        // just wait
    }
}

Using the #define’d units NS, US or MS for nanoseconds, microseconds and milliseconds, respectively, you can eloquently express your desired delay time:

delay(500 NS);
delay(250 MS);
et cetera