I would like to know the recommended ADC configuration for correct and stable DC measurements with SDK 3.0.8.0.
Based on the information available in "UM-B-004 DA14580_581 Peripheral Drivers_v1.4.pdf" and "DA14580_DS_v3.1.pdf" my configuration currently looks like this:
adc_calibrate();
adc_init(GP_ADC_SE | GP_ADC_CHOP, ADC_POLARITY_UNSIGNED, 0 /* no GP_ADC_ATTN3X */);
adc_enable_channel(ADC_CHANNEL_P00);
adc_get_sample();
adc_disable();
Looking at adc_get_vbat_sample() it seems that:
1. GP_ADC_CHOP usage is not preferred, although it is highly recommended in "DA14580_DS_v3.1.pdf"; instead, two samples with opposite GP_ADC_SIGN are used (see the sketch below)
2. a 20 microsecond delay is used right after adc_init(), which matches the timing characteristics described in "Table 293" of "DA14580_DS_v3.1.pdf" (note: the same 20 microsecond delay is used after adc_init() in adc_calibrate()).
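For reference, here is my reading of that two-sample pattern as a sketch (the channel and flags come from my own P0_0 setup above, not from the SDK source, so treat them as assumptions):

/* Sketch of the software-chopping pattern apparently used by
 * adc_get_vbat_sample(): two conversions with opposite GP_ADC_SIGN,
 * averaged so that the static ADC offset ideally cancels. */
static uint16 adc_get_sample_chopped(void)
{
    uint32 sample_pos, sample_neg;

    adc_init(GP_ADC_SE, 0, 0);              /* positive polarity */
    adc_enable_channel(ADC_CHANNEL_P00);
    sample_pos = adc_get_sample();

    adc_init(GP_ADC_SE, GP_ADC_SIGN, 0);    /* inverted polarity */
    adc_enable_channel(ADC_CHANNEL_P00);
    sample_neg = adc_get_sample();

    adc_disable();

    return (uint16)((sample_pos + sample_neg) >> 1);
}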
1. Is the 20 microsecond delay needed after adc_init()?
2. Are/should the necessary delays be controlled via GP_ADC_DELAY_REG and GP_ADC_DELAY2_REG, taking into consideration that adc_init() sets GP_ADC_DELAY_EN?
3. Is there more information available about the inner workings of the ADC-related registers?
Hi Icujba, in case you do not use attenuation, no delay is required. With attenuation (required if you intend to measure voltages > 1.2V), you should add some delay to allow the internal caps to charge; this should be more in the range of 4us. There is more information regarding the inner workings of the ADC; it is, however, not publicly available. Do you have some specific questions?
Best regards, RvA
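Putting RvA's advice into code, a minimal sketch of the "with attenuation" sequence might look like this (the busy-wait iteration count is a placeholder that has to be tuned to the actual core clock; it is not an SDK value):

/* Sketch: measuring an input above 1.2V with the 3x attenuator.
 * A short settling delay (~4us per the advice above) lets the
 * internal caps charge before the first conversion. */
adc_init(GP_ADC_SE, 0, GP_ADC_ATTN3X);
adc_enable_channel(ADC_CHANNEL_P00);

for (volatile int d = 0; d < 20; d++)   /* ~4us delay, count is a guess */
{
}

uint16 sample = adc_get_sample();
adc_disable();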
Hi RvA,
Thank you for your reply. I will use the suggested delay for the "with attenuation" scenario.
Best regards, Ionut
Hi,
I have really simple code and I do not get consistent results from the ADC (at least the lowest 4 bits must be discarded, so the effective resolution is 6 bits; you do the same in battery.c). The ADC also looks to have quite a large offset error, and calibration does not help. The offset error is the main problem, especially at lower voltages...
My test setup is really simple: external power ---> 100k ohm resistor ---> input pin (configured as PID_ADC). The code uses your examples as-is, so the adc functions have not been modified (except that the CHOP bit is added at the last stage).
uint16 u16ADC = 0U;
uint16 i = 1000U;
uint16 u16Min = 0xFFFFU;
uint16 u16Max = 0U;

DBG_PRINT("\n\rADC att 1x, calib before meas, 1000x single meas\n\r");

adc_enable_channel(ADC_CHANNEL_VDD_REF);
adc_calibrate();
adc_init(GP_ADC_SE, 0, 0);          /* single-ended mode, 1.18V reference */
adc_enable_channel(ADC_MEAS_TEMP);

/* Take 1000 single conversions and track the min/max spread. */
while (i--)
{
    u16ADC = adc_get_sample();
    if (u16ADC < u16Min) { u16Min = u16ADC; }
    if (u16ADC > u16Max) { u16Max = u16ADC; }
}
adc_disable();

DBG_PRINT("ADC min: 0x%x, max: 0x%x, diff %u\r\n", u16Min, u16Max, u16Max - u16Min);
With 50mV, the recorded maximum value is correct; the rest are too low:
ADC att 1x, calib before meas, 1000x single meas
ADC min: 0xf, max: 0x3b, diff 44
I also tried averaging 10 samples (the loop is sketched after the output below), but the results still have ~17% error at 50mV; 140mV has ~5.5% error and 550mV ~1.7% error. The minimum error (~1.25%) is reached at a level of ~700mV, after which the error looks to stay stable...
Raising the voltage does not help; the difference between min/max samples stays the same (and again the highest values are the correct ones):
ADC att 1x, calib before meas, 1000x single meas
ADC min: 0x150, max: 0x17a, diff 42
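The averaging I tried is essentially the following sketch (a plain truncating average over back-to-back conversions; the function name is my own):

/* Average n back-to-back conversions; used with n = 10 above. */
static uint16 adc_get_sample_avg(uint16 n)
{
    uint32 acc = 0U;
    uint16 k;

    for (k = 0U; k < n; k++)
    {
        acc += adc_get_sample();
    }
    return (uint16)(acc / n);
}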
The ADC calibration register values vary within the following ranges after calibration...
OFFP: 0x204-0x208
OFFN: 0x1f8-0x1FA
The ADC functionality is not described in the datasheet, so I would like to get all the material you have on it. There are also some register bits (GP_ADC_CHOP, GP_ADC_IDYN...) which may do something, but according to your reference design (a battery is quite a stable DC source?) those are not used...
Enabling GP_ADC_CHOP in adc_init() gives better results in terms of min/max difference, but now the max is not as close to the actual value...
ADC att 1x, calib before meas, 1000x single meas
ADC min: 0x158, max: 0x176, diff 30
I guess GP_ADC_CHOP should definitely be used. Can GP_ADC_I20U and GP_ADC_IDYN be enabled simultaneously, and what is the difference between the constant and dynamic current modes?
Sorry about saying the maximum value was correct; I looked at the diff at 50mV, where 44 decimal as an AD conversion result corresponds to ~50mV assuming a 1.18V reference. So there is no way to determine the "correct" value in software...
I guess I have figured out part of the problem (the large difference between min and max): there was noise from the external power supply. I added a capacitor at the measurement end, which solved that one. However, there still looks to be an offset error; is this a feature? The error looks to be ~5%-2%, being highest at the lower end...
The mV print is converted using the formula AVG*1180/1023, and the ADC values are now in decimal (the conversion helper is sketched after the readings):
72mV (measured) gives (CHOP_ON): ADC min: 65, max: 67, diff 2, AVG: 66, mV: 76
366mV (measured) gives (CHOP_ON): ADC min: 314, max: 316, diff 2, AVG: 314, mV: 362
692mV (measured) gives (CHOP_ON): ADC min: 588, max: 590, diff 2, AVG: 588, mV: 678
835mV (measured) gives (CHOP_ON): ADC min: 706, max: 709, diff 3, AVG: 707, mV: 815
1066mV (measured) gives (CHOP_ON): ADC min: 903, max: 906, diff 3, AVG: 905, mV: 1043
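The conversion itself is just this helper (assuming the 1.18V internal reference and the 10-bit full scale of 1023, as above; the helper name is my own):

/* ADC counts -> millivolts, assuming a 1.18V reference and no
 * attenuation (10-bit full scale = 1023), as in the readings above. */
static uint16 adc_to_mv(uint16 adc_counts)
{
    return (uint16)(((uint32)adc_counts * 1180U) / 1023U);
}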
I need to sample at maximum speed (about 50kHz); for example, I take 1000 samples continuously. I can't use a capacitor, so the sample offset is huge for me.
Can anyone tell me how to solve it?
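One software-only idea, borrowed from the opposite-sign averaging discussed earlier in this thread: take each output sample as a back-to-back pair of conversions with GP_ADC_SIGN toggled and average the pair, which should cancel the static offset without an external capacitor. A sketch follows (whether two conversions per sample still fit the 50kHz budget is an assumption to verify on hardware):

/* Sketch: continuous capture with per-sample software chopping.
 * adc_get_sample_chopped() is the two-conversion sketch from
 * earlier in this thread; each output value averages a pair of
 * opposite-sign conversions so the static offset term cancels. */
void adc_capture_chopped(uint16 *buf, uint16 count)
{
    uint16 k;

    for (k = 0U; k < count; k++)
    {
        buf[k] = adc_get_sample_chopped();
    }
}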