Sunday, May 24, 2015

Step Zero for or1200_uvm

Here is the quick update and summary:
  1. I set up my development machine using Amazon AWS (EC2)
  2. I started reading the documentation for the or1200 processor
  3. I've selected my first interface to target
  4. Finally, I created an account on GitHub
Based on my reading, the or1200 processor uses a Wishbone interface to reach imem (instruction memory) and dmem (data memory). The Wishbone interface is heavily used in open source hardware designs. I don't want to get too far ahead of myself but... if I complete a functional verification environment for the or1200 processor, then the next step would be SoC-level verification. My limited understanding is that most peripherals use the Wishbone interface as well.

So I've concluded that a worthy first task is to invest the effort into making an extremely robust Wishbone uvm_agent. I want the agent to expose an API for all of the low-level transactions so that a testbench/testcase developer can build more complicated protocols on top of my implementation. I also want to take the opportunity to apply my SystemVerilog Assertions course knowledge to the Wishbone interface to verify low-level protocol correctness. As of this post I am still laying out the "boiler plate" UVM structure, but I hope to eventually have a functional master-to-slave loopback type of test up and running.
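To make the low-level API idea concrete, here is a minimal Python sketch of the classic Wishbone single read/write handshake the agent would drive. The signal names (cyc, stb, we, ack) come from the Wishbone spec, but this toy slave, its memory size, and the `cycle` method are my own illustration, not or1200 or project code.

```python
# Behavioral model of a Wishbone "classic" single read/write cycle.
# Illustration only: the slave is a hypothetical one-port memory.

class WishboneSlave:
    def __init__(self, size=256):
        self.mem = [0] * size

    def cycle(self, cyc, stb, we, adr, dat_w=0):
        """One bus cycle: returns (ack, dat_r)."""
        if not (cyc and stb):            # no valid request -> no ack
            return (0, 0)
        if we:                           # write cycle
            self.mem[adr] = dat_w
            return (1, 0)
        return (1, self.mem[adr])        # read cycle

slave = WishboneSlave()
ack, _ = slave.cycle(cyc=1, stb=1, we=1, adr=0x10, dat_w=0xABCD)  # write
ack, data = slave.cycle(cyc=1, stb=1, we=0, adr=0x10)             # read back
```

A real uvm_agent would drive these signals on a virtual interface clock by clock; this model only captures the request/ack contract that the SVA protocol checks would enforce.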

I'll continue to post as I make progress.

Saturday, April 11, 2015

My New Project

In my last post I discussed how the "autotune" pitch correction project was taking too much effort outside of the core skills that I wanted to practice.

I've decided to start working on verification of the OpenRISC cores. This way I can completely focus on the DUT. I am going to start on the OR1200 core as it seems stable and no longer under active development. This feels like the best project to increase my chances of breaking into the hardware verification field. CPU verification is one of the more common opportunities I've recently seen for verification engineers, and many companies are deciding to build their own CPU solutions, which require their own verification efforts.

The OpenRISC community seems to be very active and accessible which is an important factor.  This project idea is great because I can start with block verification, move to full CPU verification and even do verification efforts at the SoC level.  There is an existing set of tools for verification at the different levels but I am going to write my own because I will be able to refresh on my UVM skills while getting familiar with CPU designs.

I've been browsing through the OR1200 spec and the HDL. The first obvious course of action is to develop the verification tools around the Wishbone interface. In the OR1200, Wishbone is used by the CPU to access main memory.

The Wishbone specification is well documented and there are some HDL BFMs on GitHub, which will help me ramp up. I plan to write a SystemVerilog/UVM testbench for this. The interface will have SystemVerilog Assertions to catch protocol violations, and the UVM components will be capable of sending and receiving Wishbone transactions.

I'll update once I have something to share.

Friday, March 13, 2015

Back from the dead

I stopped updating this blog because I had an interview for my dream job. It required a lot of studying so I had no time to move the auto tune project forward.

I was really close to landing the job but it fell through. After some reflection I think it is best that I take up a pure verification project. The auto tune project is cool but it pulls my attention in too many directions (RTL, Signal Processing algorithms,  etc).

I'm probably going to take a project from OpenCores and write a UVM testbench around it. This way I can focus more on the skills that I am trying to sell. Maybe one day I can complete the auto tune project, but for now it's being retired.

Sunday, January 25, 2015

Project Update

Phase Vocoder

Since the last project update I've been trying to get a better understanding of the phase vocoder and how to use it for automatic pitch correction. I have some basic code running.

Figure 1: The top plot is the input signal and the bottom plot is the time stretched version 


Figure 2: The top plot is the input signal and the bottom plot is the time compressed version

The non-trivial aspect of these stretch and compression transformations is that the frequency is maintained in the output signal. This is a crucial intermediate step before doing pitch correction. The only remaining step is a change in the sample rate to return the signal to the same time duration as the input signal.
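To make that remaining step concrete, here is a small worked example. The sample rate, tone, and one-semitone shift are made-up numbers for illustration: after time-stretching by a factor r, resampling the longer signal back to the original duration multiplies every frequency by r.

```python
fs = 44100                     # sample rate (arbitrary choice)
n_in = 44100                   # 1 second of input audio
r = 2 ** (1 / 12)              # stretch factor for a one-semitone shift

# time-stretched output: same pitch as the input, but longer
n_stretched = round(n_in * r)

# resampling n_stretched samples back down to n_in samples restores the
# 1-second duration, and every frequency is scaled by r: a 440 Hz input
# tone comes out near 466 Hz (one semitone up)
f_out = 440 * r
```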

I implemented the basic Phase Vocoder as covered in the paper "Improved Phase Vocoder Time-Scale Modification of Audio" by Jean Laroche and Mark Dolson.

My understanding of the algorithm is the following:
  1. Divide the input signal into frames. These frames are generally short (I used 512 samples). A given frame's start time overlaps with the previous frame, creating a sort of signal redundancy.
  2. Each frame is moved to the frequency domain using the DFT/FFT.
  3. When generating the frequency-domain representation of the output signal, the magnitude of each FFT is preserved, but the phase is calculated from the estimated frequency of each "bin" and how much time stretching/compression will be done.
  4. Each output frequency-domain frame is transformed back to a time-domain frame using the IFFT/IDFT.
  5. The output frames are combined into a single time stream using an overlap-and-add method.
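The five steps above can be sketched as a short Python/NumPy function. This is my own compact rendering of the algorithm, not the code from the Laroche/Dolson paper; the frame size of 512 matches what I used, while the hop size, Hann window, and test tone are arbitrary choices.

```python
import numpy as np

def time_stretch(x, rate, n_fft=512, hop=128):
    """Stretch (rate < 1) or compress (rate > 1) x in time, keeping pitch.

    Analysis frames are taken every rate*hop input samples; synthesis
    frames are overlap-added every hop output samples.
    """
    window = np.hanning(n_fft)
    # expected phase advance per synthesis hop for each FFT bin
    omega = 2 * np.pi * hop * np.arange(n_fft) / n_fft
    positions = np.arange(0, len(x) - (n_fft + hop), rate * hop)
    out = np.zeros(len(positions) * hop + n_fft)
    phase = np.zeros(n_fft)
    for i, p in enumerate(positions):
        p = int(p)
        # steps 1-2: two overlapping frames, moved to the frequency domain
        s1 = np.fft.fft(window * x[p:p + n_fft])
        s2 = np.fft.fft(window * x[p + hop:p + hop + n_fft])
        if i == 0:
            phase = np.angle(s1)
        # step 3: keep the magnitude, accumulate the phase based on each
        # bin's estimated (instantaneous) frequency
        dphi = np.angle(s2) - np.angle(s1) - omega
        dphi -= 2 * np.pi * np.round(dphi / (2 * np.pi))  # wrap to [-pi, pi]
        phase += omega + dphi
        # step 4: back to the time domain
        frame = np.real(np.fft.ifft(np.abs(s2) * np.exp(1j * phase)))
        # step 5: overlap and add
        out[i * hop:i * hop + n_fft] += window * frame
    return out

fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)       # 1 s, 440 Hz test tone
stretched = time_stretch(x, 0.5)      # roughly 2x longer, same pitch
compressed = time_stretch(x, 2.0)     # roughly 2x shorter, same pitch
```

The key property, as in the plots above, is that the dominant frequency of `stretched` and `compressed` stays at the input's 440 Hz even though their durations change.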

Sunday, January 11, 2015

Project Update

Results from the DUT (interpolation filtering block)

In the last blog update I said that I needed to finish the interpolation_filtering block soon because the project was not moving quickly enough.

This weekend I completed the scoreboard for the block. The only pending issue is the rounding/scaling sub-block but I am going to move on anyway. If this proves to be a serious issue then I will come back to it.

I ran a test where I sent 500 samples to the block. Because of resampling and filtering, the block outputs 120 samples. The DUT and model produce the same results. It seems like everything is working well.

# UVM_INFO C:/questasim_10.2c/examples/julian/projects/autotune/src/tb/uvm/interpolation_filter/scoreboard_interpolation_filter.sv(174) @ 2500550.0ns: uvm_test_top.env0.sb [SCOREBOARD_CMP] tb_top_interpolation_filter.scoreboard_interpolation_filter.check_phase : dut_sample_data = 00000000, model_sample_data = 00000000, dut_sample_width = 00000012, model_sample_width = 00000012
# UVM_INFO C:/questasim_10.2c/examples/julian/projects/autotune/src/tb/uvm/interpolation_filter/scoreboard_interpolation_filter.sv(177) @ 2500550.0ns: uvm_test_top.env0.sb [SCOREBOARD_MATCH] tb_top_interpolation_filter.scoreboard_interpolation_filter.check_phase


# UVM_INFO C:/questasim_10.2c/examples/julian/projects/autotune/src/tb/uvm/interpolation_filter/scoreboard_interpolation_filter.sv(174) @ 2500550.0ns: uvm_test_top.env0.sb [SCOREBOARD_CMP] tb_top_interpolation_filter.scoreboard_interpolation_filter.check_phase : dut_sample_data = 0000000a, model_sample_data = 0000000a, dut_sample_width = 00000012, model_sample_width = 00000012
# UVM_INFO C:/questasim_10.2c/examples/julian/projects/autotune/src/tb/uvm/interpolation_filter/scoreboard_interpolation_filter.sv(177) @ 2500550.0ns: uvm_test_top.env0.sb [SCOREBOARD_MATCH] tb_top_interpolation_filter.scoreboard_interpolation_filter.check_phase


# [SCOREBOARD_CMP]   120
# [SCOREBOARD_MATCH]   120
# [TESTCASE_PASS]     1


Going Forward

I have to start spec-ing out the "Processing Engine" block. This block will take the audio samples from the interpolation filter, do the pitch correction on the stream, and finally write the samples to memory. At a basic algorithm level I still have many questions about how this will work. I will be spending the next few weeks in GNU Octave prototyping the algorithm. This block will involve complex numbers, FFTs, and trigonometric calculations, so there should be lots of fun implementing it.

Tuesday, January 6, 2015

Project Update

I haven't provided an update for a couple of weeks. Here is the progress since the last update:
  • I've learned the basics of DPI. This includes the SystemVerilog coding, the C coding, and the build-script differences
  • I've defined a basic interface between SV and C. 
    • 1 API to send data to C. 
    • 1 API to run the calculations. 
    • 1 API to retrieve the data from C
  • I've implemented all of the sub-blocks included in the interpolation filter
    • Zero Stuffing
    • Filtering
    • Decimation
    • Rounding / Scaling
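For illustration, the four sub-blocks chain together as in this Python/NumPy sketch. The interpolation factor, decimation factor, filter taps, and saturation range are all placeholders of my choosing, not the project's actual coefficients or fixed-point widths.

```python
import numpy as np

def interpolation_filter(samples, L=3, M=2, taps=(0.25, 0.5, 0.5, 0.25)):
    # 1. zero stuffing: insert L-1 zeros between input samples
    stuffed = np.zeros(len(samples) * L)
    stuffed[::L] = samples
    # 2. low-pass filtering to remove the images created by zero stuffing
    filtered = np.convolve(stuffed, taps, mode="same")
    # 3. decimation: keep every M-th sample
    decimated = filtered[::M]
    # 4. rounding / scaling: gain of L to restore amplitude, then round
    #    and saturate to a placeholder signed fixed-point range
    scaled = np.clip(np.round(decimated * L), -2048, 2047)
    return scaled.astype(int)

out = interpolation_filter(np.ones(10) * 100)   # 10 samples in, 15 out
```

The rounding/scaling stage here does the saturation the real sub-block is supposed to do (the part I noted is not implemented correctly yet in RTL).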
Looking forward:
  • I really need to finish off this block because the overall project is moving slower than I expected
  • Finalize the filter coefficients; I've chosen to do low-pass filtering and linear interpolation
  • Compare the RTL outputs with the C-model outputs and find any discrepancies between the two
  • I might need to re-design the rounding / scaling block. It is intended to keep the signal within the limits that can be represented with a fixed point system but that is not how it is implemented :(
  • MOVE ON TO THE NEXT BLOCK :)