Past events

  • 10:15 AM
    Geoconvention 2015, Telus Convention Centre, Room Telus 101. Well logging
    is the process of making physical measurements down boreholes in order to
    characterize geological and structural properties. Logs are visually
    interpreted and correlated to classify regions that are similar in
    structure, a process that can be modelled with machine learning. This
    project applies supervised learning methods to labelled well logs from the
    Trenton Black River data set in order to classify major stratigraphic
    units. Spectral co-occurrence coefficients were used for feature
    extraction, and a k-nearest-neighbours approach was used for
    classification. This novel approach was applied to real field data in a
    high-impact domain, yielding promising results for future research.
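
    A minimal sketch of the classification step described above, assuming the
    spectral co-occurrence features have already been computed per depth
    window; the feature matrix and labels below are random placeholders, not
    the Trenton Black River data:

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(0)
      # Placeholder inputs: one row of spectral co-occurrence coefficients per
      # log window, and one interpreter-picked stratigraphic unit per window.
      features = rng.normal(size=(500, 16))
      labels = rng.integers(0, 4, size=500)

      X_train, X_test, y_train, y_test = train_test_split(
          features, labels, test_size=0.25, random_state=0)

      clf = KNeighborsClassifier(n_neighbors=5)  # k-nearest-neighbours classifier
      clf.fit(X_train, y_train)
      print("held-out accuracy:", clf.score(X_test, y_test))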

  • 10:40 AM
    Geoconvention 2015, Telus Convention Centre, Room Telus 101. Common image
    gathers are used in building velocity models, inverting anisotropy
    parameters, and analyzing reservoir attributes. Often primary reflections
    are used to form image gathers, and multiples are typically attenuated in
    processing to remove the strong coherent artifacts they generate, which
    interfere with the imaged reflectors. However, researchers have shown
    that, if correctly used, multiples can actually provide extra illumination
    of the subsurface in seismic imaging, especially for delineating
    near-surface features. In this work, we borrow ideas from the literature
    on imaging with surface-related multiples and apply them to extended
    imaging. This way we avoid the massive computational cost of separating
    multiples from the data before using them during the formation of image
    gathers. We also mitigate the strong coherent artifacts generated by
    multiples, which can send migration-velocity-analysis-type algorithms in
    the wrong direction. Synthetic examples on a three-layer model show the
    efficacy of the proposed formulation.
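
    For reference, a sketch of the subsurface-offset extended imaging
    condition this abstract builds on (one common formulation; the exact
    variant used in the talk may differ):

      I(\mathbf{x}, \mathbf{h}) = \sum_{s}\sum_{\omega}
        \overline{u_s(\mathbf{x}-\mathbf{h}, \omega)}\,
        u_r(\mathbf{x}+\mathbf{h}, \omega)

    where u_s is the source-side wavefield, u_r the receiver-side wavefield,
    and \mathbf{h} the subsurface offset. To image with surface-related
    multiples, one typical choice (assumed here) is to inject the recorded
    upgoing data, with reversed polarity, alongside the point-source wavelet
    as an areal source, so that u_s also carries the downgoing multiples and
    no prior multiple separation is needed.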

  • Seminar: Eran Treister (UBC), ESB 5104, Thursday, April 2, 2015, 19:30-20:30
  • Abstract:
    When numerically solving a PDE in three dimensions, it is often necessary
    to generate a mesh on which to discretize the solution. Often this can be
    expensive to do. However, by using ideas from optimal transport it is
    possible both to construct a mesh quickly and cheaply, and also to prove
    that it has the necessary regularity properties to allow an accurate
    approximation of the solution of the PDE. In this talk I will describe
    these methods, prove results about their regularity and then apply them
    to some problems in meteorology.
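
    A minimal 1-D illustration of the equidistribution idea that such
    optimal-transport mesh methods generalize to higher dimensions: mesh
    points are placed so that each cell carries an equal share of the
    integral of a user-chosen monitor function (the monitor below is a
    made-up example, not one from the talk).

      import numpy as np

      def equidistribute(monitor, a, b, n, resolution=10000):
          """Place n+1 mesh points in [a, b] so that every cell carries an
          equal share of the integral of the monitor function."""
          x_fine = np.linspace(a, b, resolution)
          m = monitor(x_fine)
          # Cumulative integral of the monitor (trapezoidal rule), normalized.
          cdf = np.concatenate(
              ([0.0], np.cumsum(0.5 * (m[1:] + m[:-1]) * np.diff(x_fine))))
          cdf /= cdf[-1]
          # Invert the cumulative map at equally spaced targets.
          return np.interp(np.linspace(0.0, 1.0, n + 1), cdf, x_fine)

      # Example: cluster mesh points near a sharp feature at x = 0.5.
      mesh = equidistribute(
          lambda x: 1.0 + 50.0 * np.exp(-200.0 * (x - 0.5) ** 2), 0.0, 1.0, 20)
      print(mesh)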

  • Abstract: As in several other disciplines, progress in exploration
    seismology relies on the collection and processing of massive data
    volumes that grow exponentially in size as the survey area and desired
    resolution increase. This exponential growth, in combination with the
    increased complexity of the next generation of iterative wave-equation
    based inversion algorithms, puts strains on our acquisition systems and
    computational back ends, impeding progress in our field. During this
    talk, I will review how recent randomized algorithms from Compressive
    Sensing and machine learning can be used to overcome some of these
    challenges by fundamentally rethinking how we sample and process seismic
    data. The key idea here is to reduce acquisition and computational costs
    by deliberately working on small randomized subsets of the data at a
    desired accuracy. I will illustrate these concepts using a variety of
    compelling examples on realistic synthetics and field data.
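
    A toy sketch of the "work on a small randomized subset" idea: a signal
    that is sparse in the DCT domain is recovered from a random 25% of its
    samples with a basic iterative soft-thresholding loop (the signal,
    transform and solver are illustrative choices, not the algorithms from
    the talk).

      import numpy as np
      from scipy.fft import dct, idct

      rng = np.random.default_rng(1)
      n = 512
      # Synthetic "trace": a few random spikes in the DCT domain.
      coeffs = np.zeros(n)
      coeffs[rng.choice(n, 8, replace=False)] = rng.normal(size=8)
      signal = idct(coeffs, norm='ortho')

      # Randomized acquisition: keep only 25% of the samples.
      mask = np.zeros(n, dtype=bool)
      mask[rng.choice(n, n // 4, replace=False)] = True
      observed = signal[mask]

      # ISTA: recover the DCT coefficients from the subsampled measurements.
      x = np.zeros(n)
      step, lam = 1.0, 0.01
      for _ in range(300):
          residual = np.zeros(n)
          residual[mask] = idct(x, norm='ortho')[mask] - observed
          x -= step * dct(residual, norm='ortho')                    # gradient step
          x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)   # shrinkage

      recovered = idct(x, norm='ortho')
      print("relative error:",
            np.linalg.norm(recovered - signal) / np.linalg.norm(signal))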

  • ESB 5104, 2207 Main Mall. Abstract: Famously, United States Secretary of
    Defense Donald Rumsfeld in February 2002 made the following statement in
    response to the lack of evidence linking the government of Iraq with
    weapons of mass destruction: “…as we know, there are known knowns; there
    are things that we know that we know. We also know there are known
    unknowns; that is to say we know there are some things we do not know.
    But there are also unknown unknowns, the ones we don’t know we don’t
    know.”
    We geophysicists and geologists are generally not eager to compare
    ourselves with politicians. We are, after all, scientists who come to an
    understanding of nature from evidence of what is true rather than from
    what we want to be true. But in this case Rumsfeld’s efforts to infer
    definite conclusions about the existence of weapons of mass destruction
    from a mass of evidence that was largely inaccurate, insufficient and
    inconsistent share an uncanny similarity to the job that geophysicists do
    when processing and interpreting seismic exploration data, especially in
    land scenarios where seismograms typically contain more noise than
    signal. In both the political and geophysical situations, definite
    conclusions need to be made despite the lack of hard and fast evidence.
    Furthermore, in both cases the lack of definitive evidence is no reason
    for not clinging strongly to belief in the truth of the conclusions.
    Land seismic exploration can be a frustratingly inaccurate science. The
    seismic data is acquired with sources and receivers on the surface of the
    earth, so the irregular, inhomogeneous, unconsolidated near-surface
    layers distort the wavefields going down from the sources as well as the
    reflected wavefields coming up to the receivers. The near-surface of the
    earth has a serious blurring effect on the image of the targets at depth
    that we really want to resolve clearly. This we know. This is a known
    known.
    The seismic processor’s job is to remove the unknown effects of statics,
    scaling and waveform distortions from the millions of seismograms that
    typically comprise each seismic dataset. Decades-old surface-consistent
    methods allow the processor to turn these unknowns into knowns.
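    For readers unfamiliar with the term, the classic surface-consistent
    decomposition such methods solve for can be written (here for statics;
    the talk may use a different variant) as

      t_{ij} \approx s_i + r_j + g_k + m_k x_{ij}^2

    where i and j index the source and receiver locations, k the midpoint and
    x_{ij} the offset; s_i and r_j are source and receiver statics, g_k a
    structural term and m_k a residual-moveout term. An analogous
    multiplicative (log-additive) model is used for surface-consistent
    amplitude scaling.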
    However, the assumptions built into these methods are highly simplistic
    and in a strict sense are known to be false. As scientists, we need to be
    skeptical about our results in order not to fall into traps of irrational
    thinking. For example, we should not conclude that a surface-consistent
    solution is correct just because it makes the data look better. There are
    many ways to make the data look better for the wrong reasons. So instead
    of being proud of our accomplishments, perhaps we are better off being
    skeptical that our accomplishments are as great as we think they are.
    There can always be unknown unknowns.
    There is a fourth category that Rumsfeld did not mention: unknown knowns.
    We all have biases against evidence that comes into conflict with our
    previously held beliefs. This is a natural tendency that protects us from
    the charlatans out there who try to sell us falsehoods. How, for example,
    can we possibly get more frequency bandwidth out of the earth than what
    we put in? Too much skepticism, however, can also trap us in our own
    falsehoods: sometimes we think we know but actually we don’t know.
    Using examples from land exploration seismology I will attempt to explain
    how simple concepts can provide surprising challenges to how we think and
    test our integrity as scientists. The concepts are general enough to be
    of interest to any geoscientist, regardless of her background.

    Peter Cary has B.Sc. and M.Sc. degrees in physics, a B.A. degree in
    philosophy from the University of Toronto, and a Ph.D. in geophysics
    (1987) from Cambridge University, England. He worked for Chevron both in
    Calgary and in La Habra, California from 1982 to 1984, was Manager of
    Geophysical Research with Pulsonic Geophysical Ltd. from 1988 to 1996,
    and was Chief Geophysicist with Sensor Geophysical Ltd. from 1996 to
    2011. He is presently Chief Geophysicist, Processing, with Arcis Seismic
    Solutions, TGS. He has presented and published many papers on seismic
    processing, and served as technical program chairman of the SEG 2000
    Annual Meeting and of the 1993 CSEG Annual Meeting. He served as CSEG
    president in 2004-05 and was 2nd V.P. of the CSEG in 1996-97. He was an
    associate editor (seismic processing) of Geophysics from 1998 to 2001.
    One of his specialities is processing and writing software for
    multicomponent seismic data.

  • https://sites.google.com/site/saravkin/

    Abstract:
    Many scientific computing applications can be formulated as large-scale
    optimization problems, including inverse problems, medical and seismic
    imaging, classification in machine learning, data assimilation in weather
    prediction, and sparse difference graphs. While first-order methods have
    proven widely successful in recent years, recent developments suggest
    that matrix-free second-order methods, such as interior-point methods,
    can be competitive.

    This talk has three parts. We first develop a modeling framework for a
    wide range of problems, and show how conjugate representations can be
    exploited to design a uniform interior point approach for this class. We
    then show a range of applications, focusing on modeling and special
    problem structure. Finally, we preview some recent work, which suggests
    that the conjugate representations admit very efficient matrix-free
    methods in important special cases, and present some recent results for
    large-scale extensions.
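
    A minimal sketch of what "matrix-free second-order" means in practice,
    assuming a simple regularized least-squares objective (an illustrative
    stand-in, not the conjugate-representation framework from the talk): the
    Hessian is never formed explicitly, only applied to vectors inside a
    conjugate-gradient solve.

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, cg

      # Toy problem: minimize 0.5*||A x - b||^2 + 0.5*mu*||x||^2, where A is
      # only used through matrix-vector products (in practice A could be an
      # expensive forward operator such as a PDE solve).
      rng = np.random.default_rng(2)
      m, n, mu = 200, 100, 1e-2
      A = rng.normal(size=(m, n)) / np.sqrt(m)
      b = rng.normal(size=m)

      def hess_vec(v):
          # Hessian-vector product (A^T A + mu I) v without forming A^T A.
          return A.T @ (A @ v) + mu * v

      H = LinearOperator((n, n), matvec=hess_vec)
      x = np.zeros(n)
      grad = A.T @ (A @ x - b) + mu * x
      step, _ = cg(H, -grad)   # Newton step computed matrix-free
      x = x + step
      print("gradient norm after one Newton step:",
            np.linalg.norm(A.T @ (A @ x - b) + mu * x))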

  • ESB 4133, 2207 Main Mall
    http://www.iam.ubc.ca/direct-solvers-sparse-matrices-introduction-applications-and-supercomputing

    We will review the state-of-the-art techniques in the parallel direct
    solution of linear systems of equations and present several recent new
    research directions. These include (i) fast methods for evaluating
    certain selected elements of a matrix function, which can be used for
    solving the Kohn-Sham equation without explicit diagonalization, and (ii)
    stochastic optimization problems under uncertainty arising in electrical
    power grid systems. Several algorithmic and performance engineering
    advances are discussed to solve the underlying sparse linear algebra
    problems. The new developments include novel incomplete augmented
    multicore sparse factorizations, multicore- and GPU-based dense matrix
    implementations, and communication-avoiding Krylov solvers. We also
    improve the interprocess communication on Cray systems to solve, e.g.,
    24-hour horizon power grid problems of realistic size with up to 1.95
    billion decision variables and 1.94 billion constraints. Full-scale
    results are reported on Cray XC30 and BG/Q, where we observe very good
    parallel efficiencies and solution times within an operationally defined
    time interval. To our knowledge, "real-time"-compatible performance on a
    broad range of architectures for this class of problems has not been
    possible prior to the present work.
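
    As a small illustration of the factor-once, solve-many pattern at the
    heart of sparse direct solvers (using SciPy's SuperLU interface as a
    stand-in for the parallel solvers discussed in the talk):

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import splu

      # Build a sparse 2-D Poisson (Laplacian) matrix on an n-by-n grid.
      n = 50
      T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
      A = (sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))).tocsc()

      lu = splu(A)   # sparse LU factorization, done once
      for k in range(3):
          b = np.random.default_rng(k).normal(size=n * n)
          x = lu.solve(b)   # each right-hand side needs only triangular solves
          print(k, np.linalg.norm(A @ x - b))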

  • Location: https://goo.gl/maps/OAo3M

    URL for Speaker:
    http://www.ics.inf.usi.ch/

    Institute of Computational Science, Università della Svizzera italiana

    About Olaf Schenk: The research of Olaf Schenk concerns algorithmic and
    architectural problems in the field of computational mathematics,
    scientific computing and high-performance computing. The research has a
    strong emphasis on applications in computational science. From a
    mathematical and computer science perspective, this field requires a
    close interaction of numerical methods such as numerical linear algebra,
    nonlinear optimization and PDEs. In addition, high-performance
    information technology also plays an important role in gaining insight
    into realistic applications. Our group possesses expertise in the design
    and analysis of parallel and multi- and manycore algorithms for
    real-world applications on emerging architectures. To this end, our
    research reconnects several relevant subfields of computer science with
    the needs of Computational Science and High-Performance Computing.
    Typically, our group will drive research towards extreme-scale computing
    in computational algorithms, application software, programming, and
    software tools.