GRB221009A src-ind analysis moon
Back to the Data analysis page
Go back to Transient Working Group.
Go back to Gamma-Ray Bursts (GRBs).
Go back to GRB221009A.
- Analysis by A. Aguasca-Cabot (Universitat de Barcelona - arnau.aguasca@fqa.ub.edu)
General information
- We have two different analyses for GRB221009A. First, the analysis under strong moon conditions, covering the data taken on October 10th and 12th (see section LST Observations in GRB221009A). Second, the analysis under dark/mild moon conditions, with data taken between October 15th and October 28th. Note, however, that data taking at the position of GRB221009A was extended up to November, so those later observations can probably be used as background data.
- The analysis is done using lstchain v0.9.9 or lstchain v0.9.10. The differences between these two releases do not significantly affect the offsite analysis.
Strong moon analysis
Since the observations were done under strong moon conditions, the HV gain was reduced to 70%/50% of the nominal value (see Elog for more information). As of the time of data taking and the beginning of 2023, this type of analysis is not standard.
Problems
Due to the non-standard observation conditions, several aspects of the analysis are affected. They are the following (probably more problems are missing): (Some of the items listed here come from an internal communication with Abelardo, thanks!)
- Pedestal pixels: several pixels can be tagged as unusable with the standard calibration configuration. As a consequence, several pixels are not calibrated.
- F-factor: With reduced HV, the F-factor will probably change, since for a given signal in the PMT there will be larger gain fluctuations due to the moon conditions. Do we have the characterisation of the PMTs in those conditions? That is, we do not have the gain-fluctuation curve for these conditions. Due to the wrong F-factor, we will then get wrong conversion factors for the data. This acts as an additional noise with an F-factor-style behaviour (gain fluctuations) that we should consider in the MC simulations.
- MC-real data match
- To match the correct gain response of the PMTs in simulation, the gain-fluctuation curve for these conditions must be known. If we do not have this curve, the simulations will not reproduce the real response, which introduces an additional "F-factor-style" noise (gain fluctuations) between MC and real data.
- The high-noise conditions produce large fluctuations in the PMTs. Since the standard cleaning method is the tailcut cleaning with the pedestal-std condition, several pixels get increased picture/boundary thresholds because of the high standard deviation of the pedestal pixels. This produces an inhomogeneous camera response, with image-cleaning values that change across the camera. On the other hand, the MC cleaning does not show this behaviour, because the pedestal-std condition is not used for MC data (it is intended for stars, which are not included in the current MC). There is therefore a mismatch between data and MC.
- NSB tuning: The standard analysis procedure matches the NSB of the data (Moon, stars in the FoV, ...) at the DL1 stage. The problem of doing so is that the added noise is not as faithful as the actual NSB, which enters at the PMT-waveform level. Also, if we do not assess and correct the additional F-factor-style noise mentioned above, it will contribute to the noise difference between MC and data that the tuning script tries to match, but, because it behaves differently, it will not be correctly accounted for by the current NSB tuning. (And maybe even the MC tuned with those NSB parameters will not be as good as the output we would have with only photon noise.)
- How to address these problems?
- Pedestal pixels: I think Franca solved this issue by increasing the nominal range for pedestal pixels to be tagged as usable (not fully sure).
- F-factor: One could consider applying an empirical correction for such high-NSB data.
- MC-real data match (1): We need the gain-fluctuation curve.
- MC-real data match (2): The tailcut cleaning levels should be high enough that only a handful of pixels get raised thresholds, so that data and MC behave more or less similarly (see the sketch after this list).
- MC-real data match (NSB): Use the NSB tuning at the waveform level, or produce dedicated MC simulations that directly match the NSB of the runs.
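To make the pedestal-std condition more concrete, here is a toy sketch (not from the original analysis) of how per-pixel picture thresholds get raised and how the fraction of raised pixels can be estimated. The pedestal values are dummy numbers, and the max(picture_thresh, ped_mean + sigma*ped_std) rule is an assumption about how the lstchain tailcut cleaning with pedestal threshold behaves.

<pre>
import numpy as np

# Toy illustration of the pedestal-std cleaning condition.
# ped_mean / ped_std are per-pixel pedestal charge mean and standard deviation
# from interleaved pedestal events; here they are dummy numbers mimicking a
# noisy (strong-moon) camera.
rng = np.random.default_rng(0)
n_pixels = 1855                                   # LST-1 camera pixels
ped_mean = rng.normal(0.0, 0.3, n_pixels)
ped_std = rng.normal(10.0, 2.0, n_pixels).clip(min=1.0)

sigma = 2.5            # std factor used in this analysis
picture_thresh = 30    # candidate global picture threshold (p.e.)

# Assumed behaviour of the cleaning: each pixel uses the global picture
# threshold, raised to ped_mean + sigma * ped_std where the pedestal noise demands it.
pixel_thresh = np.maximum(picture_thresh, ped_mean + sigma * ped_std)

raised_fraction = np.mean(pixel_thresh > picture_thresh)
print(f"Fraction of pixels with a raised picture threshold: {raised_fraction:.1%}")
</pre>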
Analysis procedure
Based on the problems we face in this analysis, the following procedure is used (detailed information in each section):
- 1. Calibrations: First, Franca reprocessed the DL1 data (taken on Oct. 10th) to obtain a better calibration and more usable pixels.
- 2. Image cleaning: The runs at the DL1 stage were analysed to find, for each run, the image cleaning such that only about 4% of the pixels have their picture threshold raised by the pedestal std-dev condition, and no more than 5% of the pedestal events survive the cleaning (the former condition is the restrictive one in practice; a sketch of the pedestal-survival check is given after this list). The std factor for the image cleaning with pedestal std is set to 2.5 (the standard value). Next, we apply the best cleaning to the DL1 real data.
- 3. NSB tuning: We determined the noise that has to be added to the MC data in order to match the NSB conditions of the real data. Both the NSB tuning at the waveform level and at the DL1 stage were used (with the tailcut cleaning values). However, for the former, the tuned MC gammas have a strange pixel charge distribution and the computation time is very high (between 1-2 days, or 12 h-1 day if we consider lstchain PR 1066; nevertheless, with the latter code all the tuned MC files still show the strange pixel charge distribution). We decided to proceed with the NSB tuning at the DL1 stage.
- 4. MC productions: Based on steps 2 and 3, we decided to produce different MC productions to analyse the data.
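As a complement to step 2, the following is a toy sketch of the pedestal-survival check using ctapipe's tailcuts_clean. The pedestal images are dummy Gaussian noise, and the plain tailcut cleaning is used here instead of the pedestal-threshold variant, so this only illustrates the criterion, not the actual procedure.

<pre>
import numpy as np
from ctapipe.image import tailcuts_clean
from ctapipe.instrument import CameraGeometry

# Toy check: no more than ~5% of the interleaved pedestal events should
# survive the chosen tailcut cleaning. In the real check the pedestal images
# are read from the DL1 file instead of being simulated noise.
geom = CameraGeometry.from_name("LSTCam")
rng = np.random.default_rng(1)
pedestal_images = rng.normal(0.0, 10.0, size=(1000, geom.n_pixels))

picture_thresh, boundary_thresh = 30, 15          # e.g. the values used for MC production 1

n_survived = 0
for image in pedestal_images:
    mask = tailcuts_clean(
        geom,
        image,
        picture_thresh=picture_thresh,
        boundary_thresh=boundary_thresh,
        keep_isolated_pixels=False,
        min_number_picture_neighbors=2,
    )
    n_survived += mask.any()                      # event "survives" if any pixel remains

print(f"Pedestal survival fraction: {n_survived / len(pedestal_images):.1%}")
</pre>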
Calibrations
You can see here the difference between the DL1 files produced with the standard calibration configuration (the standard one when the data were taken, v0.9) and the DL1 files recalibrated by Franca (specific paths can be found later) for a run taken on Oct. 10th.
Due to the high number of unusable pixels in the standard files (v0.9), the resulting picture threshold is very high, because the picture threshold assigned to the unusable pixels is the maximum pedestal threshold value found among the pedestal pixels of the camera.
On the other hand, few unusable pixels are tagged in the standard files for the runs recorded on Oct. 12th.
Image cleaning
You can see below how the picture threshold evolves for the different runs on Oct. 10th and 12th. The picture threshold increases with run_id for a given day because the moon was rising. Notice that the picture threshold for run 9613 is higher than for the rest of the runs on Oct. 12th because the HV was reduced to 50% of the nominal value, not to 70% as in the other runs.

[[File:image-cleaning20221010.jpeg|thumb|none|600px|Run-wise picture threshold distribution for runs taken on Oct. 10th.]]
[[File:image-cleaning20221012.jpeg|thumb|none|600px|Run-wise picture threshold distribution for runs taken on Oct. 12th.]]
NSB tuning
Here you can see the comparison of the pixel charge distribution between real data and MC data tuned with the NSB tuning at the waveform level (right and wrong tuning?). The NSB tuning at the DL1 stage is added to the right plot.
It is clear that MC data and real data do not fully match in the pixel charge distribution.
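As a rough illustration of how such a comparison can be made, the sketch below histograms the DL1 pixel charges of a real-data file and an NSB-tuned MC file. The file names are placeholders, and the HDF5 node path is an assumption about the lstchain v0.9 DL1 layout; check it against your files.

<pre>
import numpy as np
import matplotlib.pyplot as plt
import tables

# Assumed location of the pixel images inside lstchain v0.9 DL1 files.
IMAGE_KEY = "/dl1/event/telescope/image/LST_LSTCam"

def pixel_charges(filename, key=IMAGE_KEY, max_events=5000):
    """Return a flat array of pixel charges from (at most) the first max_events images."""
    with tables.open_file(filename) as f:
        images = f.get_node(key).read(stop=max_events, field="image")
    return np.asarray(images).ravel()

data_q = pixel_charges("dl1_LST-1.Run09602.h5")   # real data (placeholder file name)
mc_q = pixel_charges("dl1_gamma_nsb_tuned.h5")    # NSB-tuned MC (placeholder file name)

bins = np.linspace(-10, 50, 121)
plt.hist(data_q, bins=bins, histtype="step", density=True, label="real data")
plt.hist(mc_q, bins=bins, histtype="step", density=True, label="NSB-tuned MC")
plt.yscale("log")
plt.xlabel("pixel charge [p.e.]")
plt.ylabel("normalised counts")
plt.legend()
plt.savefig("pixel_charge_comparison.png")
</pre>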
MC productions
Based on the image cleaning and NSB tuning studies, we decided to produce the following MC productions (a configuration sketch follows the list).
- Date: 20221010
- MC production 1
- Runs to analyse (run_id): 9602-9604
- Tail cut cleaning (picture thd., boundary thd.): 30, 15
- NSB tuning:
"increase_nsb": true, "extra_noise_in_dim_pixels": 0.6, "extra_bias_in_dim_pixels": 0.269, "transition_charge": 8, "extra_noise_in_bright_pixels": 0.566,
- MC production 2
- Runs to analyse (run_id): 9605-9607
- Tail cut cleaning (picture thd., boundary thd.): 34, 17
- NSB tuning:
"increase_nsb": true, "extra_noise_in_dim_pixels": 0.6, "extra_bias_in_dim_pixels": 0.269, "transition_charge": 8, "extra_noise_in_bright_pixels": 0.566,
- Date: 20221012
- MC production 1
- Runs to analyse (run_id): 9613
- Tail cut cleaning (picture thd., boundary thd.): 30, 15
- NSB tuning:
"increase_nsb": true, "extra_noise_in_dim_pixels": 0.6, "extra_bias_in_dim_pixels": 0.269, "transition_charge": 8, "extra_noise_in_bright_pixels": 0.566,
- MC production 2
- Runs to analyse (run_id): 9614-9615
- Tail cut cleaning (picture thd., boundary thd.): 34, 17
- NSB tuning:
"increase_nsb": true, "extra_noise_in_dim_pixels": 0.6, "extra_bias_in_dim_pixels": 0.269, "transition_charge": 8, "extra_noise_in_bright_pixels": 0.566,
- MC production 3
- Runs to analyse (run_id): 9616-9617
- Tail cut cleaning (picture thd., boundary thd.): 34, 17
- NSB tuning:
"increase_nsb": true, "extra_noise_in_dim_pixels": 0.6, "extra_bias_in_dim_pixels": 0.269, "transition_charge": 8, "extra_noise_in_bright_pixels": 0.566,
Monte Carlo information
- Link to MC files used:
- Particle types:
- DEC band: Zenith Range
DL1 data
Please include the settings used to produce your DL1a and DL1b files. This includes whether you used LSTOSA, specific versions of lstchain, dl1ab scripts, cleaning levels, and calibration information.
Example:
Reprocessed using DL1a files produced by LSTOSA (lstchain v0.7.3) and dl1ab script (v0.7.5)
- original DL1a files
/fefs/aswg/data/real/DL1/20210808/v0.7.3/tailcut84/dl1_LST-1.Run0XXXX.XXXX.h5 /fefs/aswg/data/mc/DL1/20200629_prod5_trans_80/XXX/zenith_20deg/south_pointing/20210416_v0.7.3_prod5_trans_80_local_taicut_8_4/
- lstchain v0.7.5
- real data: tailcut8-4 (with cleaning based on pedestal RMS), dynamic cleaning
/fefs/aswg/workspace/seiya.nozaki/data/BLLac/v0.7.3/tailcut84_dynamic_v075/DL1_raw/lstchain_standard_config.json
- MC: tailcut8-4 (with cleaning based on pedestal RMS), dynamic cleaning, PSF+NSB tuning
"image_modifier": { "increase_nsb": true, "extra_noise_in_dim_pixels": 0.74, "extra_bias_in_dim_pixels": 0.38, "transition_charge": 8, "extra_noise_in_bright_pixels": 0.052, "increase_psf": true, "smeared_light_fraction": 0.125 },
/fefs/aswg/workspace/seiya.nozaki/data/MC/v0.7.5/tailcut84_dynamic_bllac/DL1_raw/lstchain_dl1ab_tune_MC_to_BLLac_config.json
- Produced DL1b files
/fefs/aswg/workspace/seiya.nozaki/data/MC/v0.7.5/tailcut84_dynamic_bllac/DL1_raw/data/ /fefs/aswg/workspace/seiya.nozaki/data/BLLac/v0.7.3/tailcut84_dynamic_v075/DL1_raw/Run0XXXX
Random forest
Please include the specific .json files used to produce your RFs, for which source analysis (source-dependent or independent), and the variables used in the RF.
Example
- lstchain-0.7.6.dev242+g2086cb9
- source-indep (diffuse gamma, src_r<3deg, disp_norm)
/fefs/aswg/workspace/seiya.nozaki/data/MC/v0.7.5/tailcut84_dynamic_bllac/srcindep/RF/lstchain_standard_config.json
- source-dep
/fefs/aswg/workspace/seiya.nozaki/data/MC/v0.7.5/tailcut84_dynamic_bllac/srcdep/RF/lstchain_src_dep_config.json
DL2 data
Information about your DL2 data and settings, such as: specific versions of lstchain, specific .json version used.
Example:
- lstchain-0.7.6.dev242+g2086cb9
- source-indep
/fefs/aswg/workspace/seiya.nozaki/data/MC/v0.7.5/tailcut84_dynamic_bllac/srcindep/DL2/data/ /fefs/aswg/workspace/seiya.nozaki/data/BLLac/v0.7.3/tailcut84_dynamic_v075/srcindep/DL2/data/dl2_LST-1.Run0XXXX_merged.h5
- source-dep
/fefs/aswg/workspace/seiya.nozaki/data/MC/v0.7.5/tailcut84_dynamic_bllac/srcdep/DL2/data/ /fefs/aswg/workspace/seiya.nozaki/data/BLLac/v0.7.3/tailcut84_dynamic_v075/srcdep/DL2/data/dl2_LST-1.Run0XXXX_merged.h5
DL3 data selection
Information about your DL3 data selection.
Example (a code sketch of this selection follows the list):
- intensity > 50
- r: [0, 1]
- wl: [0.1, 1]
- leakage_intensity_width_2: [0, 0.2]
- source-indep
- fixed_gh_cut: 0.3
- fixed_theta_cut: 0.2
- source-dep
- fixed_gh_cut: 0.7
- fixed_alpha_cut: 10
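The example selection above could be applied to a DL2 parameter table roughly as in the sketch below. The HDF5 key and the column names are assumptions about the lstchain DL2 layout and should be checked against the actual files.

<pre>
import pandas as pd

# Assumed location of the DL2 parameters inside lstchain DL2 files.
DL2_KEY = "/dl2/event/telescope/parameters/LST_LSTCam"
dl2 = pd.read_hdf("dl2_LST-1.Run0XXXX_merged.h5", key=DL2_KEY)

# Quality cuts from the example above.
quality = (
    (dl2["intensity"] > 50)
    & dl2["r"].between(0, 1)
    & dl2["wl"].between(0.1, 1)
    & dl2["leakage_intensity_width_2"].between(0, 0.2)
)

# Source-independent gamma/hadron separation; the theta cut (0.2 deg) is
# applied afterwards on the reconstructed direction relative to the source.
selected = dl2[quality & (dl2["gammaness"] > 0.3)]
print(f"{len(selected)} events pass the source-independent selection")
</pre>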
High-level analysis
Please put any information about the production of the higher-level analysis here.
Example (a gammapy sketch follows the list):
- lstchain to generate source-dep IRF, DL3
- Science Tool: gammapy 0.18.2
- point-like IRF, 1D analysis
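A minimal sketch of the 1D analysis listed above, assuming the gammapy 0.18 high-level API (DataStore, SpectrumDatasetMaker, ReflectedRegionsBackgroundMaker). The DL3 path, run selection, energy binning, ON-region radius and the power-law model are placeholders, not the settings actually used.

<pre>
import astropy.units as u
from astropy.coordinates import SkyCoord
from regions import CircleSkyRegion

from gammapy.data import DataStore
from gammapy.maps import MapAxis
from gammapy.datasets import Datasets, SpectrumDataset
from gammapy.makers import SpectrumDatasetMaker, ReflectedRegionsBackgroundMaker
from gammapy.modeling import Fit
from gammapy.modeling.models import PowerLawSpectralModel, SkyModel

data_store = DataStore.from_dir("path/to/DL3")          # folder with DL3 + index files
observations = data_store.get_observations()            # optionally pass a run list

target = SkyCoord(288.265 * u.deg, 19.774 * u.deg, frame="icrs")  # GRB 221009A (approx.)
on_region = CircleSkyRegion(center=target, radius=0.2 * u.deg)    # matches fixed_theta_cut

e_reco = MapAxis.from_energy_bounds("0.05 TeV", "30 TeV", nbin=20, name="energy")
e_true = MapAxis.from_energy_bounds("0.01 TeV", "100 TeV", nbin=40, name="energy_true")

# Per-run data reduction with reflected-regions (wobble) background.
dataset_empty = SpectrumDataset.create(e_reco=e_reco, e_true=e_true, region=on_region)
dataset_maker = SpectrumDatasetMaker(
    containment_correction=False, selection=["counts", "exposure", "edisp"]
)
bkg_maker = ReflectedRegionsBackgroundMaker()

datasets = Datasets()
for obs in observations:
    dataset = dataset_maker.run(dataset_empty.copy(name=f"run{obs.obs_id}"), obs)
    datasets.append(bkg_maker.run(dataset, obs))

# Joint power-law fit to all runs (placeholder model parameters).
model = SkyModel(
    spectral_model=PowerLawSpectralModel(
        index=2.0, amplitude="1e-11 cm-2 s-1 TeV-1", reference="1 TeV"
    ),
    name="GRB221009A",
)
for dataset in datasets:
    dataset.models = model

result = Fit(datasets).run()
print(result)
</pre>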
Analysis Results
Please place higher-level analysis results (Spectra, SkyMaps, Lightcurves, etc) here.