Wednesday, May 22, 2013

Procedures

Preparing telescope systems to take dome flats



Prepare everything to move telescope and dome

  1. Turn HBS on
  2. Raise M1
  3. Put SCS in follow mode
  4. Cass rotator on
  5. Locking pins out
  6. Dome in computer mode
  7. Init dome
  8. Set the Deployable Baffles to RETRACTED and the periscope to OPEN
  9. Be sure that the mirror cover and instrument cover are open
  10. Move the telescope, CRCS and dome to:
      - Telescope: Az = 265 deg, Elev = +45 deg
      - Rotator: 180 deg
      - Dome: Az = 185 deg
  11. Deassert the Az/El and Cass rotator drives (this brakes the telescope in place)

Once the flats are done

  1. Take the telescope back to Zenith
  2. Turn HBS off
  3. Park M1
  4. Remove SCS from follow mode
  5. Locking pins in
  6. Close mirror and instrument covers
  7. Leave dome control off
  8. CRCS off



Saturday, May 11, 2013

Troubleshooting TIPS

VMEs NEW
VME system consoles.
To open a VME console, open a terminal window on any of the control room screens and type console "system name" (examples: console PCS, console CRCS)

Systems that don't start automatically.
To start one manually, open the system console by typing
console "system name"
and once you have the system prompt, type
<startup
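Putting the two steps above together, a typical session looks like the following sketch (the system name PCS and the prompt text are examples only; the console utility and the <startup command are site-specific, as described above):

```
$ console PCS        # open the VME console for the PCS system
PCS>                 # system prompt appears once the console is attached
PCS> <startup        # manually start a system that did not start automatically
```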

CRCS NEW
How to resolve a limit condition in the Cass wrap (by Rolando Rogers).


The procedure to recover from a CRCS wrap limit fault is the following:

1. Go to the 4th floor and visually inspect the Cass rotator cable chains to make sure nothing is broken.
2. Deassert the CRCS in the dm screen (THIS IS VERY IMPORTANT to do; otherwise you will keep getting the fault as soon as you reset GIS).
3. While on the 4th floor, reset the GIS faults by pushing F1 on the AB display.
4. Then issue a CENTERWRAP command from the dm screens. PLEASE NOTE that you DON'T need to assert the drives; the CENTERWRAP command will do it for you.
5. Wait until the command is over, then go to the IDLE state after 2 or 3 minutes.
6. Reset the GIS AB display again.
7. You are ready to go; you don't need to assert the drives again, as the CENTERWRAP command did it for you.

Young model 81000 Ultrasonic Wind Sensor setup NEW
How to set up the ultrasonic wind sensors.
Link to the setup procedure in DMT



Friday, April 26, 2013

Transformer oil leak contingency plan, April 26, 2013



Hello everyone,

Diego has informed us that the main transformer is leaking oil, meaning it could fail at any time.
NOAO S is contacting the company that made the transformer to request the presence of one of their technicians on Pachon ASAP to evaluate the severity of the situation. At the moment we don't know the earliest this person could arrive on site.
In parallel, FINNING is finishing up the maintenance of our generator and expects to be ready to fire it up around 3 PM today.
Anticipating major problems, Diego has already contacted the company we are renting the second generator from, so that they allow us to keep it for a few more days until the emergency is over.
Today we had a meeting to elaborate a contingency plan in case, for whatever reason, we have a power loss.

The contingency plan covers the following scenarios.

We lose power between now and the time our generator is back in business.
Actions
We switch to the backup generator.
We reorganize our loads to ensure power to the instruments and computer room.
No operations at night.
A crew of at least 2 people stays overnight to look after the site and the generator itself, including refilling fuel.
TTM, IS, E&IG, Site, QC, the Head of Science Operations and the SOS manager are contacted.

We lose power tonight, and our generator, already repaired and tested for a few hours, kicks in.
Actions
We switch automatically to our generator as usual.
A crew of at least 2 people comes up to look after the site and the generator, ready for an emergency transfer in the event our generator fails.
We operate at night with the restrictions of being on generator power.
TTM, IS, E&IG, Site, QC, the Head of Science Operations and the SOS manager are contacted.

We lose power tonight, and our generator, already repaired and tested for a few hours, kicks in but fails after a few minutes or hours.
Actions
We switch automatically to our generator as usual.
A crew of at least 2 people comes up to look after the site and the generator, ready for an emergency transfer in the event our generator fails.
We operate at night with the restrictions of being on generator power.
IS, E&IG, Site, QC, the Head of Science Operations and the SOS manager are contacted.
If our generator fails, we disconnect it and reconnect the backup.
The site stays on the backup generator.
We reorganize our loads to ensure power to the instruments and computer room.
We cancel operations at night.
The crew stays overnight to look after the site and the generator itself, including refilling fuel.
TTM, IS, E&IG, Site, QC, the Head of Science Operations and the SOS manager are contacted.

We lose power over the weekend during the day, and our generator kicks in.
Actions
We switch automatically to our generator as usual.
A crew of at least 2 people comes up to look after the site and the generator, ready for an emergency transfer in the event our generator fails.
Another crew will have to come up to cover the night shift, looking after the generator.
TTM, IS, E&IG, Site, QC, the Head of Science Operations and the SOS manager are contacted.

We lose power over the weekend during the day, and our generator kicks in but fails.
Actions
We switch automatically to our generator as usual.
A crew of at least 2 people comes up to look after the site and the generator, ready for an emergency transfer in the event our generator fails.
Another crew will have to come up to cover the night shift, looking after the generator.
IS, E&IG, Site, QC, the Head of Science Operations and the SOS manager are contacted.
If our generator fails, we disconnect it and reconnect the backup.
The site stays on the backup generator.
We reorganize our loads to ensure power to the instruments and computer room.
We cancel operations at night.
The emergency crews look after the site and the generator itself, including refilling fuel.
TTM, IS, E&IG, Site, QC, the Head of Science Operations and the SOS manager are contacted.

Other scenarios
Nothing happens over the weekend, but the transformer still needs to be serviced. This will be done during the day. For that particular night we will evaluate whether it is advisable to leave a crew behind.
Nothing happens tonight: our generator will be tested tomorrow during the day for several hours using the dummy load.
Nothing happens and the generator maintenance is finished around 3 PM: we will test the generator tonight and tomorrow. Tonight we will test it until 10 PM.
Nothing happens tonight but the oil level in the transformer is dangerously low: we will evaluate whether it is advisable to transfer the load to our generator after it has been tested tomorrow, and turn the transformer off.

Main roles already defined.
Diego is our single point of contact with NOAO S.
Gustavo will make sure the whole plan is properly coordinated and resourced.
Rolando looks after the instruments and acts as technical advisor.
Paul supports NOAO S and our electrician, including night shifts.
Alejandro, Pedro and Hector are the E&IG team members covering after-hours needs and night shifts.
Note: this is a preliminary list; other roles will be defined later and resources will be identified.

Other support needed
John Michael: we need vehicles available for the emergency with their fuel tanks full.

Regards,

Gustavo

Pictures taken today at 2:30 PM




Pictures taken on Monday, April 29; everything looks normal now




Thursday, March 28, 2013

Sunday, February 24, 2013

TCS crash events

GS TCS crash troubleshooting spreadsheet

December 2012 event
Below you will find the email thread from the December 2012 TCS crash event.
Hola Arturo,

FYI.

The following email thread shows the type of problems we experienced
with the s/w versions. We had the very same behavior, but with different
symptoms, on the Feb run. In both cases the TCS version was the one
loaded/unloaded/loaded... after a huge amount of h/w board
swapping/config changes/etc.

Saludos,

Ramón.

===========================================================================================================

From: Ramon Galvez
Sent: Wednesday, December 05, 2012 12:42 PM
To: Maxime Boccas
Cc: Gustavo Arriagada; Rolando Rogers; Andrew Serio; Cristian Urrutia;
Benoit Neichel; Rodrigo Carrasco; Javier Luhrs; Pablo Diaz; Gaston
Gausachs; Fabrice Vidal
Subject: Re: NGSWFS Status

Hola Maxime,

We did the version swap on Monday and have had no issues so far with the new one. We have no clear understanding of what may have caused the problem we had this run. Since the only change between the previous run (Oct 30 --> Nov 03) and this one was the TCS s/w version, we installed the old version, and since that time we have had no issues. As mentioned, going back to the new one (Monday morning) has not given problems either, though it was tested without offloads and without closing loops, so more tests can be done in those conditions.

On the h/w side, we have a complete power & grounding preventive maintenance scheduled for this subsystem, to be carried out tomorrow morning.

It is very important to say that the NGSWFS probes have been used extensively in all the runs we have had since 2010 with the highly demanding Probe Mapping - I have seen demands of ~10 nm! - and all that time they have been successful. I have talked to Benoit about this and we think that the demand rate is unnecessarily high. We do have some minor timeouts - easily solved on the fly - that we are working on.

The weird behavior we saw on Friday and Saturday was completely different, i.e. the OMS58 controller (8-axis controller) was completely lost, and this was highly intermittent, making it very difficult to diagnose. We have had no more problems since Sunday, and a lot of Probe Mapping was successfully done that night. Maybe Benoit wants to comment on this.

Saludos,

Ramón.


On 05-12-2012 12:08, Maxime Boccas wrote:

> I was a bit behind the news with my last email asking about this.... It seems confirmed there was a SW version issue then? Shall we review our change control protocol?
> Thanks
> M
>
> -----Original Message-----
> From: Gustavo Arriagada
> Sent: Monday, December 03, 2012 11:09 AM
> To: Ramon Galvez; Rolando Rogers
> Cc: Cristian Urrutia; Benoit Neichel; Rodrigo Carrasco; Javier Luhrs;
> Pablo Diaz; Maxime Boccas; Gaston Gausachs; Fabrice Vidal
> Subject: RE: NGSWFS Status
>
> This is very encouraging news, good job everyone!!! Hopefully today's tests will confirm your suspicions.
>
> Saludos
>
> Gustavo
>
>
> ________________________________________
> From: Ramon Galvez
> Sent: Monday, December 03, 2012 1:24 AM
> To: Gustavo Arriagada; Rolando Rogers
> Cc: Cristian Urrutia; Benoit Neichel; Rodrigo Carrasco; Javier Luhrs;
> Pablo Diaz
> Subject: NGSWFS Status
>
> Hola Rolando, Gustavo y todos,
>
> Today we had to come up to work on the NGSWFS issues that were reported by Benoit. As I've mentioned to some of you, I saw similar problems during h/w & s/w development, but that was about 4 years ago. Since then, this is the first time that this type of highly intermittent issue has shown up, mainly while in Probe Tracking Mode. All this started on this particular run; nothing like this was seen on the Oct 30 --> Nov 03 run or in any run since we started the Commissioning Runs back in 2010.
>
> As reported, we have had the following issues:
>
> 1. Sudden stalled mechanism, e.g. Friday night P2Y stalled at -1.4 [mm] while going from 0.0 --> 14 [mm].
> 2. Unable to do a very basic Index on some probes.
> 3. Spiral routine getting stalled by a hardware sensor limit read by the s/w (not a hard hardware power-shutdown limit). Today P2Y reached one of those limits with no explanation, and we verified with Cristian that it was really reaching it physically. This was after a demand from the TCS due to a CWFS2 following selection.
>
> To rule out the TCS s/w version in use, I asked to have the version we used on the aforementioned previous run. Once Rodrigo was done with his GSAOI tasks, it was remotely installed by Javier at ~11pm. Then the night-time crew - Benoit, Rodrigo, Drew - started to use the Tracking Mode for real science Probe Mapping on 3 different targets. No issues have shown up; this implies continued use of all 3 probes in Tracking for 2 hrs, and it is actually ongoing.
>
> We - Ramón and Cristián - want to do more similar tests during daytime, so I entered an urgent TR for tomorrow to continue the tests prior to changing to the new TCS version for further tests by the SOS Group.
>
> Saludos,
>
> Ramón, Cristián
>
Saludos,

Ramón.

--
Ramon L. Galvez
Senior Electronics Engineer
Gemini Observatory
www.gemini.edu
Phone   : +56 51 205678
Recept. : +56 51 205600
Fax     : +56 51 205655



February 2013 event


The team:
Ramon Galvez, Vanessa Montes, John White, Pedro Gigoux, Roberto Rojas, Cristian Urrutia, Chris Morrison, Cristian Silva, Jose Varas, Benoit Neichel, Ariel Lopez, Pedro Ojeda, Herman Diaz

May 12-13 event

The team:
Ramon Galvez, Roberto Rojas, Cristian Urrutia, Chris Morrison, Cristian Silva, Jose Varas, Arturo Nunez, William Rambold, Javier Luhrs, Gustavo Arriagada


SW and IS tools compilation document
In the link above you will find all the tools the SW and IS groups have identified for use during this troubleshooting process.

May 12-13 TCS crash troubleshooting plans
In the link above you will find the troubleshooting plans that each group is proposing.

GS M2 shutdown

Below is a collection of links that will take the people working on the GS M2 shutdown to the different documents we will use in preparation for it and during the shutdown itself.

GS M2 shutdown preparations

GS M2 shutdown plan V4   NEW
GS M2 shutdown updates  


Procedures


Notes:

I have added one more tab, M1 in situ wash preps, and also a couple more tasks to the M2 removal/installation list (Gustavo, Feb 25, 2013)

I have added the link to the procedure for the M2 wash and the shutdown plan. I have also updated the shutdown resources list (Gustavo March 22, 2013)

I have added the link to M1 in situ wash procedure (Gustavo, March 22, 2013)

I have updated M2 shutdown plan (Gustavo, March 31, 2013)

M1 in situ wash cannot be done until we fix the problem with the transformer (Gustavo, March 31, 2013)

For future reference I have added GSAOI installation procedure updated by Gaston in Nov 2012 (Gustavo, March 31, 2013)


Tentative list of people working on this shutdown:

Rolando Rogers, Gabriel Perez, Hector Swett, Diego Maltes, Roberto Rojas, Pablo Diaz, Laridan Jeria, Claudio Araya, Cristian Moreno, Alejandro Gutierrez, Gustavo Arriagada, Gustavo Alarcon, Tomislav Vucina, Hector Figueroa, Ramon Galvez, Vanessa Montes, Pedro Ojeda, Sandra Romero