Appropriate resolution for regression analysis

I’m using ArcMap 10.3. I’m a student working on a research problem where I’m taking Landsat imagery (30 m resolution) and regressing it against other datasets captured at coarser scales (with cells up to about 1 km²). I am resampling everything to the 30 m Landsat grid prior to running the regression.

My ultimate goal is to understand vegetation dynamics, which is why I wanted to bring the whole analysis down to Landsat’s 30 m grid. Or is this thinking incorrect, i.e., should I run the analysis at the scale of the coarsest dataset, which is 1 km²?

From what I’ve read, in particular Hengl (2006), “Finding the right pixel size”, there is no “ideal” grid resolution; it depends on the question. Another paper I read, Tarnavsky (2007), “Multiscale geostatistical analysis of AVHRR, SPOT-VGT, and MODIS global NDVI products”, recommends choosing a pixel size that minimizes the spatial variability between sensors. Yet other papers suggest statistical corrections in order to use multiscale data.

I can do the regression. My question is whether there are any standards for how to approach an analysis where the input data are all at different scales, in my case ranging from 30 m up to 1 km².
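For concreteness, here is a rough sketch (in R with the terra package, rather than ArcMap) of the two options I am weighing; the file names are placeholders and “ndvi” stands in for whatever Landsat-derived variable I end up modelling.

library(terra)

ndvi   <- rast("landsat_ndvi_30m.tif")   # 30 m Landsat-derived layer (placeholder)
coarse <- rast("covariate_1km.tif")      # ~1 km covariate (placeholder)

# Option A: bring the coarse layer onto the 30 m grid and regress at 30 m.
# Every 30 m pixel inside a 1 km cell inherits the same covariate value,
# so these observations are not independent of each other.
cov_30m <- resample(coarse, ndvi, method = "bilinear")
d30 <- na.omit(as.data.frame(c(ndvi, cov_30m)))
names(d30) <- c("ndvi", "covariate")
fit_fine <- lm(ndvi ~ covariate, data = d30)

# Option B: aggregate the 30 m layer up to the coarse grid and regress at 1 km.
ndvi_1km <- resample(ndvi, coarse, method = "average")
d1km <- na.omit(as.data.frame(c(ndvi_1km, coarse)))
names(d1km) <- c("ndvi", "covariate")
fit_coarse <- lm(ndvi ~ covariate, data = d1km)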

Mediation analysis in SPSS when two mediators are correlated with each other?

I am conducting a study with two mediators that are parallel to each other. When I run a correlation analysis, these two mediators are also correlated with each other. Is there a method for conducting mediation analysis in SPSS that can take this correlation into account as well?
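For what it’s worth, the model I have in mind looks like the sketch below in R’s lavaan (not SPSS; X, M1, M2, Y and the data frame dat are placeholder names). The M1 ~~ M2 line is what lets the two mediators’ residuals covary. Is there an equivalent way to set this up in SPSS?

library(lavaan)

model <- '
  # a-paths: predictor to each mediator
  M1 ~ a1 * X
  M2 ~ a2 * X
  # b-paths and direct effect
  Y ~ b1 * M1 + b2 * M2 + cp * X
  # allow the two mediators (their residuals) to covary
  M1 ~~ M2
  # indirect effects through each mediator
  ind1 := a1 * b1
  ind2 := a2 * b2
  total := cp + ind1 + ind2
'
fit <- sem(model, data = dat, se = "bootstrap", bootstrap = 1000)
summary(fit, standardized = TRUE, ci = TRUE)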

Nearest Neighbor Analysis Results

I’m fairly new to QGIS 3.4 and entirely self-taught. I’m looking at turtle nest sites and trying to run a nearest neighbor analysis to analyze their distribution and establish whether they are clumped, dispersed, or random. I’m doing this for a number of different years to determine whether nest sites are becoming more clumped as a result of environmental changes.

However, I’m not sure I understand the results that are being output. They seem to vary wildly from year to year, much more than I would expect, and the z-scores are just astronomical. How is QGIS establishing the expected distances?
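For reference, my rough understanding of how the expected distance and z-score are computed (sketched in R, not QGIS’s actual code) is the Clark–Evans statistic below, with the study area taken from the points’ bounding box; coords is a hypothetical two-column matrix of projected x/y coordinates in metres.

nn_summary <- function(coords) {
  n <- nrow(coords)
  d <- as.matrix(dist(coords))        # all pairwise distances
  diag(d) <- Inf                      # ignore distance to self
  obs <- mean(apply(d, 1, min))       # observed mean nearest-neighbour distance
  area <- diff(range(coords[, 1])) * diff(range(coords[, 2]))  # bounding-box area
  expd <- 0.5 / sqrt(n / area)        # expected mean distance under complete spatial randomness
  se   <- 0.26136 / sqrt(n^2 / area)  # its standard error under CSR
  c(observed = obs, expected = expd,
    index = obs / expd, z = (obs - expd) / se)
}

If that is roughly what QGIS does, then the expected distance depends entirely on the layer’s CRS and on the extent used for the area, which is where I suspect my units are going wrong.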

Maybe I’m just doing something fundamentally wrong, but I’m selecting Vector analysis -> Nearest Neighbor Analysis and then selecting my vector layer containing the nest site points. These are some of the outputs I’m getting for different years:

Observed mean distance: 29.38250622749374
Expected mean distance: 0.0007898809280688575
Nearest neighbour index: 37198.65258594057
Number of points: 98
Z-Score: 704465.0547680564

Observed mean distance: 391.4397673148066
Expected mean distance: 0.008755714190591082
Nearest neighbour index: 44706.777630481476
Number of points: 136
Z-Score: 997387.6598752242

Even the observed mean distances seem way off. Maybe I’m just doing something fundamentally wrong? Or maybe there’s just a better way of achieving what I’m trying to do. I’m not sure. Any help would be appreciated!

Analysis method for two-way slab floor with central column

What method would be appropriate to analyze this concrete structure?
A two-way concrete slab with walls and central column.

I would like to understand how to model it with either the yield line method or the equivalent frame method, or both, to be honest.

I’m studying these methods at an introductory level, but I haven’t found any example of a similar case, so I’m unsure how to draw out the analytical models.

[Figure: two-way concrete slab with walls and a central column]

Canonical correlation analysis and dummy variables in R

I have an Excel table with the following column (among others):
\begin{align}
&\text{Contraceptive}\\
&1\\
&1\\
&2\\
&3\\
&\vdots
\end{align}

where $1=\text{no use}$, $2=\text{short-term use}$, $3=\text{long-term use}$

I wanted to use CCA between the contraceptive column and another column (religious) containing a binary variable ($0=$ yes and $1=$ no).

From my understanding we have to create three columns:

\begin{array}{ccc}
\text{no use} & \text{short} & \text{long}\\
1 & 0 & 0\\
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1\\
\vdots & &
\end{array}

where $1$ indicates that the observation falls in that category and $0$ that it does not.

Is this the right procedure for CCA? How do we create these dummy variables in R?
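Here is a minimal sketch of what I have in mind, assuming the sheet has been read into a data frame dat with columns contraceptive (coded 1/2/3) and religious (0/1); the file name and column names are placeholders.

library(readxl)
dat <- read_excel("survey.xlsx")   # placeholder file name

# Turn the 1/2/3 coding into a factor, then let model.matrix() build the
# indicator (dummy) columns; the "- 1" keeps all three levels instead of
# dropping one as a reference category.
dat$contraceptive <- factor(dat$contraceptive,
                            levels = c(1, 2, 3),
                            labels = c("no_use", "short_term", "long_term"))
X <- model.matrix(~ contraceptive - 1, data = dat)
Y <- as.matrix(dat[, "religious"])

# Canonical correlation with base R's cancor()
cc <- cancor(X, Y)
cc$cor

Note that the three indicator columns sum to one, so after centring they are linearly dependent; using only two of them (treating one level as the reference) carries the same information.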

E.coli Sequencing & Analysis

I have been given the task of assembling a ‘new’ E. coli genome and analysing the genes present, etc.

The E. coli is a new strain and was run on a NextSeq 500 in high-output mode with 150 bp paired-end reads. The ‘raw’ files that I have are the forward and reverse reads.

I have initially QC-checked the ‘raw’ files, subsequently run them through Trim Galore, and checked the QC again after that.

For the next step, I now need to assemble my genome. I have been told that SPAdes will run a de novo assembly for me, and that I can then put that assembly into Prokka for gene annotation.
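A rough sketch of that plan, written here as R system2() calls rather than shell commands; both tools are assumed to be on the PATH, and the read file names, output folders, and the --careful flag are illustrative placeholders rather than recommendations.

r1 <- "sample_R1_val_1.fq.gz"   # trimmed forward reads (placeholder name)
r2 <- "sample_R2_val_2.fq.gz"   # trimmed reverse reads (placeholder name)

# De novo assembly with SPAdes
system2("spades.py", c("-1", r1, "-2", r2, "--careful", "-o", "spades_out"))

# Annotate the resulting contigs with Prokka
system2("prokka", c("--outdir", "prokka_out", "--prefix", "new_strain",
                    "--genus", "Escherichia", "--species", "coli",
                    "spades_out/contigs.fasta"))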

Is this the best way to assemble the genome and annotate it, or should I use another method? I am thinking that I should use a ‘mapping’ technique to assemble the genome using the E. coli O157:H7 genome as a reference, but I have no idea how to do this. I would say that I am at an intermediate level with Unix, but I am by no means a bioinformatician. Some help and guidance would be greatly appreciated!

Thanks,

Alex.

Two different approaches to repeated measure data analysis

My research is intended to examine the difference in people’s responses to sad vs. joyful faces. I have gathered 10 faces (5 sad and 5 joyful) and recruited 100 participants.

I asked each participant to respond to each face; the order of presentation was randomized for each participant. So now I have the dataset as follows:

Participant     face    observation
          1     sad1             x1
          1     sad2             x2
                 ...          
          1     sad5             x5
          1     joy1             x6
          1     joy2             x7
                 ...          
          1     joy5            x10
          2     sad1            x11
          2     sad2            x12
                 ...            
          2     joy1            x16
          2     joy2            x17
                 ...            
        100     sad4           x999
        100     sad5          x1000

where $x_i\;(i=1,\cdots,1000)$ are some observed values.


Approach 1: Repeated-measures ANOVA

I just used the above data structure and conducted a repeated-measures ANOVA. In R,

aov(observation ~ face + Error(Participant/face), data = Data)

Approach 2: Comparison of means

I have converted the above data into the following:

Participant     face               mean_values
          1      sad          mean(x1, ... x5)
          1      joy         mean(x6, ... x10)
          2      sad        mean(x11, ... x15)
          2      joy        mean(x16, ... x20)
                 ...          
        100      sad      mean(x991, ... x995)
        100      joy     mean(x996, ... x1000)

And I have conducted the test with predictor face and response mean_values. In R,

glm(mean_values ~ face, data = Data, family = gaussian)

What is the difference between the two approaches, and which is the more “appropriate” approach for my data?

Finite Element Analysis for Laminated Plates with Holes or Patches

As the title says, I am trying to code in FEM a plate structure that either has a hole in one of the layers or has a layer made of patches of plates rather than one whole plate. However, while I have a slight idea of how to implement this, I’m really not very sure, and I would like some questions answered before I start trudging through my code haphazardly.

From what I understand of FEM, the way to represent a hole or a patch (which is basically a plate with a hole AROUND it) is to change the connectivity matrix for a given layer. However, I don’t quite get how I would go about doing that. For example, if I had a 6×6-element plate structure and one of the layers of that plate was to have a 2×2-element hole in the middle, do I simply remove those elements from my connectivity matrix? I just have a feeling that would cause some of my matrix dimensions to not match and throw the whole thing into chaos.
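To make the bookkeeping concrete, here is an illustrative sketch (in R, purely for the index arithmetic) of the 6×6 mesh with a 2×2 hole described above; nothing in it is tied to a particular plate formulation.

nel_x <- 6; nel_y <- 6
nnode_x <- nel_x + 1                      # 7 nodes per row on a 7 x 7 node grid

# Node numbering: row-major over the node grid
node_id <- function(i, j) (j - 1) * nnode_x + i

# Connectivity: one row per 4-node quad element, corner nodes listed counter-clockwise
conn <- do.call(rbind, lapply(1:nel_y, function(j)
  t(sapply(1:nel_x, function(i)
    c(node_id(i, j), node_id(i + 1, j),
      node_id(i + 1, j + 1), node_id(i, j + 1))))))

# Elements forming the 2 x 2 hole (element numbering is also row-major)
elem_id <- function(i, j) (j - 1) * nel_x + i
hole <- c(elem_id(3, 3), elem_id(4, 3), elem_id(3, 4), elem_id(4, 4))

conn_with_hole <- conn[-hole, ]           # simply skip those rows during assembly

dim(conn)            # 36 elements x 4 nodes
dim(conn_with_hole)  # 32 elements x 4 nodes; node coordinates are unchanged

If I read the assembly process right, the global system has one row/column per nodal degree of freedom rather than per element, so skipping element rows during assembly does not by itself change the matrix dimensions; it just leaves the nodes interior to the hole unconnected, and those would then need to be removed or constrained.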

Additionally, what if the location/dimensions of my hole or patch didn’t align perfectly with the coordinates of my elements? How do I implement that? Would I have to write a completely new coordinate system for my nodes?

I’m sorry if my questions seem either too obvious or too vague. I just want to know whether I’m heading in the right direction before investing in any major change to my code. If any of you could point me in the right direction or provide some good practice/sample pieces of code, that would be great. I’m learning FEM basically through trial and error.