<div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr">OK, after looking at your configs, I noticed that I was missing a "Gres=gpu" entry on my Nodename definition. Added and distributed...<div>NodeName=tiger11 NodeAddr=X.X.X.X Sockets=2 CoresPerSocket=12 ThreadsPerCore=2 Gres=gpu:1080gtx:0,gpu:k20:1 Feature=HyperThread State=UNKNOWN<br></div><div><br></div><div>Assuming that 0 and 1 refer to device address as shown using nvidia-smi and it is not the number of GPUs in server... I have a multi GPU server with 8 GTXs, so I want to make sure I understand this correctly.</div><div>scontrol shows...</div><div><div>root@panther02 x86_64# scontrol show node=tiger11</div><div>NodeName=tiger11 Arch=x86_64 CoresPerSocket=12</div><div> CPUAlloc=0 CPUTot=48 CPULoad=19.96</div><div> AvailableFeatures=HyperThread</div><div> ActiveFeatures=HyperThread</div><div> Gres=gpu:1080gtx:0,gpu:k20:1</div><div> NodeAddr=X.X.X.X NodeHostName=tiger11 Version=18.08</div><div> OS=Linux 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015</div><div> RealMemory=1 AllocMem=0 FreeMem=268460 Sockets=2 Boards=1</div><div> State=IDLE+DRAIN ThreadsPerCore=2 TmpDisk=0 Weight=1 Owner=N/A MCS_label=N/A</div><div> Partitions=tiger_1,compute_1</div><div> BootTime=2018-04-02T13:30:12 SlurmdStartTime=2018-12-05T10:44:58</div><div> CfgTRES=cpu=48,mem=1M,billing=48,gres/gpu=2,gres/gpu:1080gtx=1,gres/gpu:k20=1</div><div> AllocTRES=</div><div> CapWatts=n/a</div><div> CurrentWatts=0 LowestJoules=0 ConsumedJoules=0</div><div> ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s</div><div> Reason=gres/gpu:1080gtx count too low (0 < 1) [slurm@2018-12-05T10:36:28]</div></div><div><br></div><div>What does the last line mean? nvidia-smi shows no jobs running on the 1080gtx...</div><div><br></div><div><br></div><div> Also changed the gres.conf file to reflect my nodes that have 2 different types of GPUs</div><div><div>NodeName=tiger[02-04,06-09,11-14,16-19,21-22] Name=gpu Type=1080gtx File=/dev/nvidia0 Cores=0</div><div>NodeName=tiger[02-04,06-09,11-14,16-19,21-22] Name=gpu Type=k20 File=/dev/nvidia1 Cores=1</div></div><div><br></div><div>This has allowed me to submit a GPU test job on tiger11. It is failing due to tensorflow environment parameters, but that is easy to fix...</div><div><br></div><div>Another question, I see Tina has a number of available features listed for each node (cpu_gen, sku, cpu_mem, etc)... Is that necessary or is that just a sanity check? </div><div><br></div><div>Once again, I like to thank all contributors to this thread... It has helped me get my cluster going!</div><div><br></div><div>Thanks.</div><div>Lou</div><div><br></div><div><br></div></div></div></div></div><br><div class="gmail_quote"><div dir="ltr">On Wed, Dec 5, 2018 at 9:41 AM Tina Friedrich <<a href="mailto:tina.friedrich@it.ox.ac.uk">tina.friedrich@it.ox.ac.uk</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hello,<br>

don't mind sharing the config at all. Not sure it helps though, it's pretty basic.

Picking an example node, I have

[ ~]$ scontrol show node arcus-htc-gpu011
NodeName=arcus-htc-gpu011 Arch=x86_64 CoresPerSocket=8
CPUAlloc=16 CPUTot=16 CPULoad=20.43
AvailableFeatures=cpu_gen:Haswell,cpu_sku:E5-2640v3,cpu_frq:2.60GHz,cpu_mem:64GB,gpu,gpu_mem:12GB,gpu_gen:Kepler,gpu_sku:K40,gpu_cc:3.5,
ActiveFeatures=cpu_gen:Haswell,cpu_sku:E5-2640v3,cpu_frq:2.60GHz,cpu_mem:64GB,gpu,gpu_mem:12GB,gpu_gen:Kepler,gpu_sku:K40,gpu_cc:3.5,
Gres=gpu:k40m:2
NodeAddr=arcus-htc-gpu011 NodeHostName=arcus-htc-gpu011
OS=Linux 3.10.0-862.14.4.el7.x86_64 #1 SMP Wed Sep 26 15:12:11 UTC 2018
RealMemory=63000 AllocMem=0 FreeMem=56295 Sockets=2 Boards=1
State=ALLOCATED ThreadsPerCore=1 TmpDisk=0 Weight=96 Owner=N/A MCS_label=N/A
Partitions=htc
BootTime=2018-11-28T15:12:29 SlurmdStartTime=2018-11-28T17:58:55
CfgTRES=cpu=16,mem=63000M,billing=16
AllocTRES=cpu=16
CapWatts=n/a
CurrentWatts=0 LowestJoules=0 ConsumedJoules=0
ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s

gres.conf on arcus-htc-gpu011 is

[ ~]$ cat /etc/slurm/gres.conf
Name=gpu Type=k40m File=/dev/nvidia0
Name=gpu Type=k40m File=/dev/nvidia1

Relevant bits of slurm.conf are, I believe

GresTypes=hbm,gpu
(DebugFlags=Priority,Backfill,NodeFeatures,Gres,Protocol,TraceJobs)

NodeName=arcus-htc-gpu009,arcus-htc-gpu[011-018] Weight=96 Sockets=2 CoresPerSocket=8 ThreadsPerCore=1 RealMemory=63000 Gres=gpu:k40m:2 Feature=cpu_gen:Haswell,cpu_sku:E5-2640v3,cpu_frq:2.60GHz,cpu_mem:64GB,gpu,gpu_mem:12GB,gpu_gen:Kepler,gpu_sku:K40,gpu_cc:3.5,
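
For what it's worth, feature strings like these are just labels a job can select with --constraint; they aren't required for the GRES setup itself. A request might look something like

  sbatch --constraint=gpu --gres=gpu:k40m:1 job.sh

(job.sh being a placeholder for whatever batch script), or use one of the more specific strings such as 'gpu_gen:Kepler'.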

Don't think I did anything else.

I have other types of nodes - a couple of P100s, a couple of V100s, a couple of K80s, and one or two odd things (M40, P4).

Used to run with a gres.conf that simply had 'Name=gpu File=/dev/nvidia[0-2]' (or [0-4], depending) and that also worked; I introduced the Type when I gained a node that has two different nvidia cards, so which card was on which device became important. It was not because the 'range' configuration caused problems.
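
On a node like that, the typed entries are what let jobs tell the cards apart; roughly something like this (illustrative only, the card-to-device mapping here is an assumption):

Name=gpu Type=k40m File=/dev/nvidia0
Name=gpu Type=p100 File=/dev/nvidia1

so a job can then ask for --gres=gpu:k40m:1 or --gres=gpu:p100:1 explicitly.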

This wasn't a fresh install of 18.x - it was a 17.x installation that I upgraded to 18.x. Not sure if that makes a difference. I made no changes to anything (slurm.conf, gres.conf) with the update though. I just installed the new rpms.

Tina

On 05/12/2018 13:20, Lou Nicotra wrote:
> Tina, thanks for confirming that GPU GRES resources work with 18.08... I might just upgrade to 18.08.03, as I am running 18.08.0.
> 
> The nvidia devices exist on all servers and persistence is set. They have been in there for a number of years and our users make use of them daily. I can actually see that slurmd knows about them while restarting the daemon:
> [2018-12-05T08:03:35.989] Slurmd shutdown completing
> [2018-12-05T08:03:36.015] Message aggregation disabled
> [2018-12-05T08:03:36.016] gpu device number 0(/dev/nvidia0):c 195:0 rwm
> [2018-12-05T08:03:36.017] gpu device number 1(/dev/nvidia1):c 195:1 rwm
> [2018-12-05T08:03:36.059] slurmd version 18.08.0 started
> [2018-12-05T08:03:36.059] slurmd started on Wed, 05 Dec 2018 08:03:36 -0500
> [2018-12-05T08:03:36.059] CPUs=48 Boards=1 Sockets=2 Cores=12 Threads=2 Memory=386757 TmpDisk=4758 Uptime=21324804 CPUSpecList=(null) FeaturesAvail=(null) FeaturesActive=(null)
> 
> Would you mind sharing the portions of the slurm.conf and corresponding GRES definitions that you are using? Do you have individual GRES files for each server based on GPU type? I tried both, and neither of them works.
> 
> My slurm.conf file has entries for GPUs as follows:
> GresTypes=gpu
> #AccountingStorageTRES=gres/gpu,gres/gpu:k20,gres/gpu:1080gtx (currently commented out)
> 
> gres.conf is as follows (I had tried different configs, no change with either one...)
> # GPU Definitions
> NodeName=tiger[01,05,10,15,20] Name=gpu Type=1080gtx File=/dev/nvidia0 Cores=0
> NodeName=tiger[01,05,10,15,20] Name=gpu Type=1080gtx File=/dev/nvidia1 Cores=1
> #NodeName=tiger[01,05,10,15,20] Name=gpu Type=1080gtx File=/dev/nvidia[0-1] Cores=0,1
> 
> NodeName=tiger[02-04,06-09,11-14,16-19,21-22] Name=gpu Type=k20 File=/dev/nvidia0 Cores=0
> NodeName=tiger[02-04,06-09,11-14,16-19,21-22] Name=gpu Type=k20 File=/dev/nvidia1 Cores=1
> #NodeName=tiger[02-04,06-09,11-14,16-19,21-22] Name=gpu Type=k20 File=/dev/nvidia[0-1] Cores=0,1
> 
> What am I missing?
> 
> Thanks...
> 
> 
> On Wed, Dec 5, 2018 at 4:59 AM Tina Friedrich <tina.friedrich@it.ox.ac.uk> wrote:
> 
> I'm running 18.08.3, and I have a fair number of GPU GRES resources - recently upgraded to 18.08.03 from a 17.x release. It's definitely not as if they don't work in an 18.x release. (I do not distribute the same gres.conf file everywhere though, never tried that.)
> 
> Just a really stupid question - the /dev/nvidiaX devices do exist, I assume? You are running nvidia-persistenced (or something similar) to ensure the cards are up & the device files initialised etc?
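> 
> A quick way to check both, for example (assuming the nvidia-persistenced unit is installed):
> ls -l /dev/nvidia*
> systemctl status nvidia-persistenced
> nvidia-smi --query-gpu=persistence_mode --format=csv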
> <br>
> Tina<br>
> <br>
> On 04/12/2018 23:36, Brian W. Johanson wrote:<br>
> > Only thing to suggest once again is increasing the logging of both<br>
> > slurmctl and slurmd.<br>
> > As for downgrading, I wouldn't suggest running a 17.x slurmdbd<br>
> against a<br>
> > db built with 18.x. I imagine there are enough changes there to<br>
> cause<br>
> > trouble.<br>
> > I don't imagine downgrading will fix your issue, if you are running<br>
> > 18.08.0, the most recent release is 18.08.3. NEWS packed in the<br>
> > tarballs gives the fixes in the versions. I don't see any that<br>
> would<br>
> > fit you case.<br>
> ><br>
> ><br>
> > On 12/04/2018 02:11 PM, Lou Nicotra wrote:<br>
> >> Brian, I used a single gres.conf file and distributed to all<br>
> nodes...<br>
> >> Restarted all daemons, unfortunately scontrol still does not<br>
> show any<br>
> >> Gres resources for GPU nodes...<br>
> >><br>
> >> Will try to roll back to 17.X release. Is it basically a matter of<br>
> >> removing 18.x rpms and installing 17's? Does the DB need to be<br>
> >> downgraded also?<br>
> >><br>
> >> Thanks...<br>
> >> Lou<br>
> >><br>
> >> On Tue, Dec 4, 2018 at 10:25 AM Brian W. Johanson<br>
> <<a href="mailto:bjohanso@psc.edu" target="_blank">bjohanso@psc.edu</a> <mailto:<a href="mailto:bjohanso@psc.edu" target="_blank">bjohanso@psc.edu</a>><br>
> >> <mailto:<a href="mailto:bjohanso@psc.edu" target="_blank">bjohanso@psc.edu</a> <mailto:<a href="mailto:bjohanso@psc.edu" target="_blank">bjohanso@psc.edu</a>>>> wrote:<br>
> >><br>
> >><br>
> >> Do one more pass through making sure<br>
> >> s/1080GTX/1080gtx and s/K20/k20<br>
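> >> (e.g. something like
> >>   sed -i 's/1080GTX/1080gtx/g; s/K20/k20/g' /etc/slurm/gres.conf
> >> run against whichever of the files still carries the uppercase spelling)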
> >>
> >> shutdown all slurmd, slurmctld, start slurmctld, start slurmd
> >>
> >> I find it less confusing to have a global gres.conf file. I haven't used a list (nvidia[0-1]), mainly because I want to specify the cores to use for each gpu.
> >>
> >> gres.conf would look something like...
> >>
> >> NodeName=tiger[02-04,06-09,11-14,16-19,21-22] Name=gpu Type=k80 File=/dev/nvidia0 Cores=0
> >> NodeName=tiger[02-04,06-09,11-14,16-19,21-22] Name=gpu Type=k80 File=/dev/nvidia1 Cores=1
> >> NodeName=tiger[01,05,10,15,20] Name=gpu Type=1080gtx File=/dev/nvidia0 Cores=0
> >> NodeName=tiger[01,05,10,15,20] Name=gpu Type=1080gtx File=/dev/nvidia1 Cores=1
> >>
> >> which can be distributed to all nodes.
> >>
> >> -b
> >>
> >> On 12/04/2018 09:55 AM, Lou Nicotra wrote:
> >>> Brian, the specific node does not show any gres...
> >>> root@panther02 slurm# scontrol show partition=tiger_1
> >>> PartitionName=tiger_1
> >>> AllowGroups=ALL AllowAccounts=ALL AllowQos=ALL
> >>> AllocNodes=ALL Default=YES QoS=N/A
> >>> DefaultTime=NONE DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO
> >>> MaxNodes=UNLIMITED MaxTime=UNLIMITED MinNodes=0 LLN=NO MaxCPUsPerNode=UNLIMITED
> >>> Nodes=tiger[01-22]
> >>> PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO
> >>> OverTimeLimit=NONE PreemptMode=OFF
> >>> State=UP TotalCPUs=1056 TotalNodes=22 SelectTypeParameters=NONE
> >>> JobDefaults=(null)
> >>> DefMemPerNode=UNLIMITED MaxMemPerNode=UNLIMITED
> >>>
> >>> root@panther02 slurm# scontrol show node=tiger11
> >>> NodeName=tiger11 Arch=x86_64 CoresPerSocket=12
> >>> CPUAlloc=0 CPUTot=48 CPULoad=11.50
> >>> AvailableFeatures=HyperThread
> >>> ActiveFeatures=HyperThread
> >>> Gres=(null)
> >>> NodeAddr=X.X.X.X NodeHostName=tiger11 Version=18.08
> >>> OS=Linux 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015
> >>> RealMemory=1 AllocMem=0 FreeMem=269695 Sockets=2 Boards=1
> >>> State=IDLE ThreadsPerCore=2 TmpDisk=0 Weight=1 Owner=N/A MCS_label=N/A
> >>> Partitions=tiger_1,compute_1
> >>> BootTime=2018-04-02T13:30:12 SlurmdStartTime=2018-12-03T16:13:22
> >>> CfgTRES=cpu=48,mem=1M,billing=48
> >>> AllocTRES=
> >>> CapWatts=n/a
> >>> CurrentWatts=0 LowestJoules=0 ConsumedJoules=0
> >>> ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s
> >>>
> >>> So, something is not set up correctly... Could it be an 18.x bug?
> >>>
> >>> Thanks.
> >>>
> >>> On Tue, Dec 4, 2018 at 9:31 AM Lou Nicotra <lnicotra@interactions.com> wrote:
> >>>
> >>> Thanks Michael. I will try 17.x as I also could not see anything wrong with my settings... Will report back afterwards...
> >>>
> >>> Lou
> >>>
> >>> On Tue, Dec 4, 2018 at 9:11 AM Michael Di Domenico <mdidomenico4@gmail.com> wrote:
> >>>
> >>> unfortunately, someone smarter than me will have to help further. I'm not sure I see anything specifically wrong. The one thing I might try is backing the software down to a 17.x release series. I recently tried 18.x and had some issues. I can't say whether it'll be any different, but you might be exposing an undiagnosed bug in the 18.x branch.
> >>> On Mon, Dec 3, 2018 at 4:17 PM Lou Nicotra <lnicotra@interactions.com> wrote:
> >>> >
> >>> > Made the change in the gres.conf file on the local server and restarted slurmd and slurmctld on master... Unfortunately same error...
> >>> >
> >>> > Distributed the corrected gres.conf to all k20 servers, restarted slurmd and slurmctld... Still has the same error...
> >>> >
> >>> > On Mon, Dec 3, 2018 at 4:04 PM Brian W. Johanson <bjohanso@psc.edu> wrote:
> >>> >>
> >>> >> Is that a lowercase k in k20 specified in the batch script and nodename and an uppercase K specified in gres.conf?
> >>> >>
> >>> >> On 12/03/2018 09:13 AM, Lou Nicotra wrote:
> >>> >>
> >>> >> Hi All, I have recently set up a slurm cluster with my servers and I'm running into an issue while submitting GPU jobs. It has something to do with gres configurations, but I just can't seem to figure out what is wrong. Non-GPU jobs run fine.
> >>> >>
> >>> >> The error is as follows:
> >>> >> sbatch: error: Batch job submission failed: Invalid Trackable RESource (TRES) specification after submitting a batch job.
> >>> >>
> >>> >> My batch job is as follows:
> >>> >> #!/bin/bash
> >>> >> #SBATCH --partition=tiger_1 # partition name
> >>> >> #SBATCH --gres=gpu:k20:1
> >>> >> #SBATCH --gres-flags=enforce-binding
> >>> >> #SBATCH --time=0:20:00 # wall clock limit
> >>> >> #SBATCH --output=gpu-%J.txt
> >>> >> #SBATCH --account=lnicotra
> >>> >> module load cuda
> >>> >> python gpu1
> >>> >>
> >>> >> Where gpu1 is a GPU test script that runs correctly when invoked via python. The tiger_1 partition has servers with GPUs, with a mix of 1080GTX and K20 as specified in slurm.conf.
> >>> >>
> >>> >> I have defined GRES resources in the slurm.conf file:
> >>> >> # GPU GRES
> >>> >> GresTypes=gpu
> >>> >> NodeName=tiger[01,05,10,15,20] Gres=gpu:1080gtx:2
> >>> >> NodeName=tiger[02-04,06-09,11-14,16-19,21-22] Gres=gpu:k20:2
> >>> >>
> >>> >> And have a local gres.conf on the servers containing GPUs...
> >>> >> lnicotra@tiger11 ~# cat /etc/slurm/gres.conf
> >>> >> # GPU Definitions
> >>> >> # NodeName=tiger[02-04,06-09,11-14,16-19,21-22] Name=gpu Type=K20 File=/dev/nvidia[0-1]
> >>> >> Name=gpu Type=K20 File=/dev/nvidia[0-1] Cores=0,1
> >>> >>
> >>> >> and a similar one for the 1080GTX
> >>> >> # GPU Definitions
> >>> >> # NodeName=tiger[01,05,10,15,20] Name=gpu Type=1080GTX File=/dev/nvidia[0-1]
> >>> >> Name=gpu Type=1080GTX File=/dev/nvidia[0-1] Cores=0,1
> >>> >>
> >>> >> The account manager seems to know about the GPUs...
> >>> >> lnicotra@tiger11 ~# sacctmgr show tres
> >>> >>     Type            Name     ID
> >>> >> -------- --------------- ------
> >>> >>      cpu                      1
> >>> >>      mem                      2
> >>> >>   energy                      3
> >>> >>     node                      4
> >>> >>  billing                      5
> >>> >>       fs            disk      6
> >>> >>     vmem                      7
> >>> >>    pages                      8
> >>> >>     gres             gpu   1001
> >>> >>     gres         gpu:k20   1002
> >>> >>     gres     gpu:1080gtx   1003
> >>> >>
> >>> >> Can anyone point out what I am missing?
> >>> >>
> >>> >> Thanks!
> >>> >> Lou
> >>> >>

-- 
Lou Nicotra
IT Systems Engineer - SLT
Interactions LLC
o: 908-673-1833
m: 908-451-6983
lnicotra@interactions.com
www.interactions.com

*******************************************************************************
This e-mail and any of its attachments may contain Interactions LLC proprietary information, which is privileged, confidential, or subject to copyright belonging to the Interactions LLC. This e-mail is intended solely for the use of the individual or entity to which it is addressed. If you are not the intended recipient of this e-mail, you are hereby notified that any dissemination, distribution, copying, or action taken in relation to the contents of and attachments to this e-mail is strictly prohibited and may be unlawful. If you have received this e-mail in error, please notify the sender immediately and permanently delete the original and any copy of this e-mail and any printout. Thank You.
*******************************************************************************