List of 321 high confidence SCZ-associated genes from Wang et al. 2018

Q:
I read your excellent work in Wang et al. 2018, and am wondering whether you could kindly share the list of 321 high confidence SCZ-associated genes. We are studying SCZ iPSC-derived interneurons, and this information would help us understand which DE genes may be causal in our system.

A:
It should be at: http://resource.psychencode.org

Request for data from Zhang and Gerstein NAR (2003)

Q:
I recently came across your paper, "Patterns of nucleotide substitution, insertion and deletion in the human genome inferred from pseudogenes."

I’m interested in the substitution rates in human pseudogenes. Figure 2A from your paper (pasted below) plots these rates. Would you be able to send me these rates as a table?

Additionally, has your group calculated substitution rates for more families of pseudogenes? (The NAR 2003 paper only analyzed ribosomal protein pseudogene sequences.) I tried poking around psiDR, but wasn't able to find this type of information readily available.

These substitution rate matrices would be very helpful for my research.

A:
see: http://www.pseudogene.org/indel-nar
(via http://papers.gersteinlab.org/papers/indel-nar)
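As an aside, the basic counting behind such a rate table can be sketched in a few lines. This is only an illustrative example with made-up sequences, not the pipeline from the NAR 2003 paper: given a gap-free pairwise alignment of a functional parent gene and its pseudogene, tally how often each parent base is substituted by each other base.

```python
# Hypothetical sketch: per-base substitution frequencies between a parent
# gene and its pseudogene, from two pre-aligned sequences of equal length.
# Sequences below are invented for illustration.
from collections import defaultdict

def substitution_matrix(parent: str, pseudo: str):
    """Return {(parent_base, pseudo_base): fraction substituted}."""
    counts = defaultdict(int)   # observed substitutions per base pair
    totals = defaultdict(int)   # how often each parent base was compared
    for a, b in zip(parent.upper(), pseudo.upper()):
        if a in "ACGT" and b in "ACGT":   # skip gaps and ambiguity codes
            totals[a] += 1
            if a != b:
                counts[(a, b)] += 1
    # normalize: fraction of each parent base that mutated to each other base
    return {pair: n / totals[pair[0]] for pair, n in counts.items()}

rates = substitution_matrix("ACGTACGTAC", "ACGTACATAC")
```

A real analysis would additionally correct these raw frequencies for multiple hits and CpG context, which the paper discusses.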

Full set of tQTLs and isoQTLs from Wang et al. 2018

Q:
As a lab, our general interests lie in the intersection between transcriptomics, neurogenetics, and genetic diagnosis. As such, we have made great use of the publicly available PEC resources on http://resource.psychencode.org/, in particular the QTL data. However, I have not been able to locate the full set of isoQTLs and tQTLs without any p-value/FDR filtering, as is available for eQTLs. Is there somewhere I can access this easily? Or does access to the full set of tQTLs and isoQTLs require an application to Synapse?

A:
Currently we don't provide access to the full set. The full set is very large, and we still need to decide where to share these data. I will let you know once we have any updates.

Questions regarding eqtl calls

Q:
I am trying to reproduce the eQTL calls published here with file name: Full_hg19_cis-eQTL. I’m having some difficulty reproducing the eQTL calls and in particular the P-values, and wanted to figure out where my pipeline isn’t matching.

1) I am unsure of the earth-based selection process on the covariate superset. Currently, we are trying to reproduce the covariate selection using the one-hot-encoded covariate superset mentioned in the supplementary material (page 7) of this publication. We are curious which covariates are selected (e.g., the brain bank covariate includes multiple institutes; are all of them selected, or just some of them?).

2) We are unsure which GTEx pipeline for eQTL calls was employed by the publication. We are currently using the GTEx pipeline mentioned here, but wonder whether the paper used an older version of the GTEx pipeline that was previously available.

3) Another question is which datasets are fed into the eQTL calls. We are currently working with the Capstone genotype datasets and the TPM expression matrix published here with file name DER-02_PEC_Gene_expression_matrix_TPM. We are wondering whether the genotype/expression filtering was done directly on these files.

4) The last question is that when we call eQTLs using FastQTL, the nominal p-values (that pass FDR < 0.05) are much larger than the p-values your study published here with the file name DER-08a_hg19_eQTL.significant (so it looks like we're incredibly underpowered). I've attached a figure comparing the nominal p-values reported in your files versus those computed by us. We have used the Capstone genotypes and expression files (as described above), and though we should be somewhat underpowered relative to your study (because we are missing the GTEx genotypes/expression files, which need separate agreements), I'm not sure that accounts for the difference in p-value magnitudes. Do you have any thoughts on which part of the pipeline we may have implemented incorrectly that could lead to such a huge difference?

A:
Here are some responses to your questions.

I am unsure of the earth-based selection process on the covariate superset. Currently, we are trying to reproduce the covariate selection using the one-hot-encoded covariate superset mentioned in the supplementary material (page 7) of this publication. We are curious which covariates are selected (e.g., the brain bank covariate includes multiple institutes; are all of them selected, or just some of them?).
Here are the covariates we are using; you can also find the description in the supplementary materials of our paper (http://papers.gersteinlab.org/papers/capstone4/index.html):

Top 3 genotyping principal components
Probabilistic Estimation of Expression Residuals (PEER) factors
Genotyping array platform
Gender
Disease status
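To illustrate what a covariate matrix built from such factors might look like, here is a minimal sketch. The column names and values below are hypothetical placeholders, not the actual PsychENCODE covariate file: categorical covariates (platform, sex, diagnosis) are one-hot encoded and stacked next to the genotype principal components.

```python
# Illustrative sketch only: one-hot encode categorical covariates and
# combine them with genotype PCs into an eQTL covariate matrix.
# All sample values and column names here are invented.
import pandas as pd

samples = pd.DataFrame({
    "platform": ["Affy6", "Illumina1M", "Affy6"],   # genotyping array
    "sex": ["M", "F", "F"],
    "diagnosis": ["SCZ", "Control", "SCZ"],
    "PC1": [0.01, -0.02, 0.03],                      # top genotype PCs
    "PC2": [0.00, 0.01, -0.01],
    "PC3": [0.02, 0.00, 0.01],
})

# drop_first avoids one redundant (collinear) indicator per categorical factor
onehot = pd.get_dummies(samples[["platform", "sex", "diagnosis"]], drop_first=True)
covariates = pd.concat([samples[["PC1", "PC2", "PC3"]], onehot], axis=1)
```

In practice PEER factors computed from the expression matrix would be appended as additional numeric columns alongside the PCs.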

We are unsure which GTEx pipeline for eQTL calls was employed by the publication. We are currently using the GTEx pipeline mentioned here, but wonder whether the paper used an older version of the GTEx pipeline that was previously available.
A detailed description of our eQTL pipeline can be found in Fig. S31 of our paper: http://papers.gersteinlab.org/papers/capstone4/index.html.

Another question is which datasets are fed into the eQTL calls. We are currently working with the Capstone genotype datasets and the TPM expression matrix published here with file name DER-02_PEC_Gene_expression_matrix_TPM. We are wondering whether the genotype/expression filtering was done directly on these files.
You can find details in Fig. S31 in our paper http://papers.gersteinlab.org/papers/capstone4/index.html.
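For orientation, the GTEx-style expression preprocessing that typically precedes FastQTL can be sketched as follows. The thresholds and toy TPM values here are illustrative assumptions, not necessarily those in Fig. S31: filter lowly expressed genes, then apply a rank-based inverse normal transform to each gene across samples.

```python
# Hedged sketch of GTEx-style expression preprocessing before eQTL calling.
# Filter thresholds and the tiny TPM matrix are invented for illustration.
import numpy as np
from scipy import stats

def inverse_normal_transform(values):
    """Map values to quantiles of a standard normal (ties get average rank)."""
    ranks = stats.rankdata(values)
    return stats.norm.ppf(ranks / (len(values) + 1))

tpm = np.array([[0.0, 0.1, 0.05],   # lowly expressed gene -> filtered out
                [5.0, 9.0, 2.0],
                [1.0, 3.0, 8.0]])

# keep genes with TPM > 0.1 in at least 2 samples (example threshold)
expressed = (tpm > 0.1).sum(axis=1) >= 2
normalized = np.apply_along_axis(inverse_normal_transform, 1, tpm[expressed])
```

The normalized matrix (genes x samples), together with the covariate file, would then be passed to FastQTL for the nominal and permutation passes.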

The last question is that when we call eQTLs using FastQTL, the nominal p-values (that pass FDR < 0.05) are much larger than the p-values your study published here with the file name DER-08a_hg19_eQTL.significant (so it looks like we're incredibly underpowered). I've attached a figure comparing the nominal p-values reported in your files versus those computed by us. We have used the Capstone genotypes and expression files (as described above), and though we should be somewhat underpowered relative to your study (because we are missing the GTEx genotypes/expression files, which need separate agreements), I'm not sure that accounts for the difference in p-value magnitudes. Do you have any thoughts on which part of the pipeline we may have implemented incorrectly that could lead to such a huge difference?
I am not sure which genotype file you are using, but we cannot share the merged genotype file, since it integrates some GTEx samples. We also used different covariates. So your results will differ from ours if the genotype, phenotype, and covariate inputs are not the same.

Yale Morph Server job not being completed

Q:
I uploaded my PDB files on the multi-chain morph server, the job ID is b499364-832. It has been two days but the results page still says that the job is not yet complete. Is there any problem with my morph? I understand that my files are very large so it may take a long time to finish, but is it possible that it takes this long? I would really appreciate your assistance.

A:
Files generated by the multi-chain server can be found in
http://www.molmovdb.org/uploads/<job ID>/
Yours are in
http://www.molmovdb.org/uploads/b499364-832/

Looking at it, some files seem to be missing.
E.g., a complete run would look like
http://www.molmovdb.org/uploads/b649592-16743/

There is a job running and it seems to be related to yours
"/usr/bin/perl ./multi.pl b499364-832 --chains=WCBAXHGFEDJLMNOPQRYAIS
--nframes=8 --email=seiga --engine=CNS --debug"

The short of it is that your job is probably stuck, since you seem to
have submitted it 6 days ago judging from the submit time. Note that
we cannot guarantee the full functionality of this service: it dates
from 2005 and has not been fully maintained since. Occasionally we may
roll it back, in which case you would need to resubmit your job.